00:00:00.000 Started by upstream project "autotest-per-patch" build number 126238 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.071 The recommended git tool is: git 00:00:00.071 using credential 00000000-0000-0000-0000-000000000002 00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.110 Fetching changes from the remote Git repository 00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.164 Using shallow fetch with depth 1 00:00:00.164 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.164 > git --version # timeout=10 00:00:00.201 > git --version # 'git version 2.39.2' 00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.229 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.960 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.976 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.989 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:04.989 > git config core.sparsecheckout # timeout=10 00:00:05.002 > git read-tree -mu HEAD # timeout=10 00:00:05.020 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.043 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.043 > git rev-list --no-walk 
7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.137 [Pipeline] Start of Pipeline 00:00:05.151 [Pipeline] library 00:00:05.152 Loading library shm_lib@master 00:00:05.152 Library shm_lib@master is cached. Copying from home. 00:00:05.168 [Pipeline] node 00:00:05.175 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.178 [Pipeline] { 00:00:05.187 [Pipeline] catchError 00:00:05.188 [Pipeline] { 00:00:05.198 [Pipeline] wrap 00:00:05.206 [Pipeline] { 00:00:05.213 [Pipeline] stage 00:00:05.214 [Pipeline] { (Prologue) 00:00:05.382 [Pipeline] sh 00:00:05.664 + logger -p user.info -t JENKINS-CI 00:00:05.679 [Pipeline] echo 00:00:05.680 Node: GP2 00:00:05.687 [Pipeline] sh 00:00:05.982 [Pipeline] setCustomBuildProperty 00:00:05.990 [Pipeline] echo 00:00:05.991 Cleanup processes 00:00:05.994 [Pipeline] sh 00:00:06.273 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.273 175394 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.286 [Pipeline] sh 00:00:06.570 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.570 ++ grep -v 'sudo pgrep' 00:00:06.570 ++ awk '{print $1}' 00:00:06.570 + sudo kill -9 00:00:06.570 + true 00:00:06.582 [Pipeline] cleanWs 00:00:06.591 [WS-CLEANUP] Deleting project workspace... 00:00:06.591 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.597 [WS-CLEANUP] done 00:00:06.602 [Pipeline] setCustomBuildProperty 00:00:06.616 [Pipeline] sh 00:00:06.942 + sudo git config --global --replace-all safe.directory '*' 00:00:07.019 [Pipeline] httpRequest 00:00:07.060 [Pipeline] echo 00:00:07.061 Sorcerer 10.211.164.101 is alive 00:00:07.067 [Pipeline] httpRequest 00:00:07.071 HttpMethod: GET 00:00:07.071 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.073 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.098 Response Code: HTTP/1.1 200 OK 00:00:07.099 Success: Status code 200 is in the accepted range: 200,404 00:00:07.099 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:32.678 [Pipeline] sh 00:00:32.960 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:32.977 [Pipeline] httpRequest 00:00:33.007 [Pipeline] echo 00:00:33.009 Sorcerer 10.211.164.101 is alive 00:00:33.018 [Pipeline] httpRequest 00:00:33.023 HttpMethod: GET 00:00:33.024 URL: http://10.211.164.101/packages/spdk_996bd8752099a6dcd6e8785d9f9d0e1e2210ec8a.tar.gz 00:00:33.025 Sending request to url: http://10.211.164.101/packages/spdk_996bd8752099a6dcd6e8785d9f9d0e1e2210ec8a.tar.gz 00:00:33.042 Response Code: HTTP/1.1 200 OK 00:00:33.042 Success: Status code 200 is in the accepted range: 200,404 00:00:33.043 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_996bd8752099a6dcd6e8785d9f9d0e1e2210ec8a.tar.gz 00:01:01.306 [Pipeline] sh 00:01:01.598 + tar --no-same-owner -xf spdk_996bd8752099a6dcd6e8785d9f9d0e1e2210ec8a.tar.gz 00:01:04.929 [Pipeline] sh 00:01:05.213 + git -C spdk log --oneline -n5 00:01:05.213 996bd8752 blob: Fix spdk_bs_blob_decouple_parent when blob's ancestor is an esnap. 00:01:05.213 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:01:05.213 248c547d0 nvmf/tcp: add option for selecting a sock impl
00:01:05.213 2d30d9f83 accel: introduce tasks in sequence limit
00:01:05.213 2728651ee accel: adjust task per ch define name
00:01:05.226 [Pipeline] }
00:01:05.244 [Pipeline] // stage
00:01:05.253 [Pipeline] stage
00:01:05.255 [Pipeline] { (Prepare)
00:01:05.272 [Pipeline] writeFile
00:01:05.289 [Pipeline] sh
00:01:05.571 + logger -p user.info -t JENKINS-CI
00:01:05.584 [Pipeline] sh
00:01:05.869 + logger -p user.info -t JENKINS-CI
00:01:05.883 [Pipeline] sh
00:01:06.168 + cat autorun-spdk.conf
00:01:06.168 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.168 SPDK_TEST_NVMF=1
00:01:06.168 SPDK_TEST_NVME_CLI=1
00:01:06.168 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.169 SPDK_TEST_NVMF_NICS=e810
00:01:06.169 SPDK_TEST_VFIOUSER=1
00:01:06.169 SPDK_RUN_UBSAN=1
00:01:06.169 NET_TYPE=phy
00:01:06.175 RUN_NIGHTLY=0
00:01:06.182 [Pipeline] readFile
00:01:06.208 [Pipeline] withEnv
00:01:06.210 [Pipeline] {
00:01:06.223 [Pipeline] sh
00:01:06.511 + set -ex
00:01:06.511 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:06.511 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:06.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.511 ++ SPDK_TEST_NVMF=1
00:01:06.511 ++ SPDK_TEST_NVME_CLI=1
00:01:06.511 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.511 ++ SPDK_TEST_NVMF_NICS=e810
00:01:06.511 ++ SPDK_TEST_VFIOUSER=1
00:01:06.511 ++ SPDK_RUN_UBSAN=1
00:01:06.511 ++ NET_TYPE=phy
00:01:06.511 ++ RUN_NIGHTLY=0
00:01:06.511 + case $SPDK_TEST_NVMF_NICS in
00:01:06.511 + DRIVERS=ice
00:01:06.511 + [[ tcp == \r\d\m\a ]]
00:01:06.511 + [[ -n ice ]]
00:01:06.511 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:06.511 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:13.130 rmmod: ERROR: Module irdma is not currently loaded
00:01:13.130 rmmod: ERROR: Module i40iw is not currently loaded
00:01:13.130 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:13.130 + true
00:01:13.130 + for D in $DRIVERS
00:01:13.130 + sudo modprobe ice
00:01:13.130 + exit 0
00:01:13.140 [Pipeline] }
00:01:13.161 [Pipeline] // withEnv
00:01:13.167 [Pipeline] }
00:01:13.182 [Pipeline] // stage
00:01:13.193 [Pipeline] catchError
00:01:13.194 [Pipeline] {
00:01:13.206 [Pipeline] timeout
00:01:13.206 Timeout set to expire in 50 min
00:01:13.207 [Pipeline] {
00:01:13.220 [Pipeline] stage
00:01:13.222 [Pipeline] { (Tests)
00:01:13.236 [Pipeline] sh
00:01:13.520 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.520 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.520 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.520 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:13.520 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:13.520 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:13.520 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:13.520 + [[ !
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:13.520 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:13.520 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:13.520 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:13.520 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.520 + source /etc/os-release
00:01:13.520 ++ NAME='Fedora Linux'
00:01:13.520 ++ VERSION='38 (Cloud Edition)'
00:01:13.520 ++ ID=fedora
00:01:13.520 ++ VERSION_ID=38
00:01:13.520 ++ VERSION_CODENAME=
00:01:13.520 ++ PLATFORM_ID=platform:f38
00:01:13.520 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:13.520 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:13.520 ++ LOGO=fedora-logo-icon
00:01:13.520 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:13.520 ++ HOME_URL=https://fedoraproject.org/
00:01:13.520 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:13.520 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:13.520 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:13.520 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:13.520 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:13.520 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:13.520 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:13.520 ++ SUPPORT_END=2024-05-14
00:01:13.520 ++ VARIANT='Cloud Edition'
00:01:13.520 ++ VARIANT_ID=cloud
00:01:13.520 + uname -a
00:01:13.520 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:13.520 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:14.457 Hugepages
00:01:14.457 node hugesize free / total
00:01:14.457 node0 1048576kB 0 / 0
00:01:14.457 node0 2048kB 0 / 0
00:01:14.457 node1 1048576kB 0 / 0
00:01:14.457 node1 2048kB 0 / 0
00:01:14.457
00:01:14.457 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:14.457 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:01:14.457 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:01:14.457 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:01:14.457 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:01:14.457 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:01:14.457 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:01:14.457 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:01:14.457 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:01:14.457 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:01:14.457 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:01:14.457 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:01:14.457 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:01:14.457 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:01:14.457 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:01:14.457 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:01:14.457 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:01:14.457 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:14.457 + rm -f /tmp/spdk-ld-path
00:01:14.457 + source autorun-spdk.conf
00:01:14.457 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.457 ++ SPDK_TEST_NVMF=1
00:01:14.457 ++ SPDK_TEST_NVME_CLI=1
00:01:14.457 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:14.457 ++ SPDK_TEST_NVMF_NICS=e810
00:01:14.457 ++ SPDK_TEST_VFIOUSER=1
00:01:14.457 ++ SPDK_RUN_UBSAN=1
00:01:14.457 ++ NET_TYPE=phy
00:01:14.457 ++ RUN_NIGHTLY=0
00:01:14.457 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:14.457 + [[ -n '' ]]
00:01:14.457 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:14.457 + for M in /var/spdk/build-*-manifest.txt
00:01:14.457 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:14.457 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:14.457 + for M in /var/spdk/build-*-manifest.txt
00:01:14.457 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:14.457 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:14.457 ++ uname
00:01:14.457 + [[ Linux == \L\i\n\u\x ]]
00:01:14.457 + sudo dmesg -T
00:01:14.457 + sudo dmesg --clear
00:01:14.457 + dmesg_pid=175973
00:01:14.457 + [[ Fedora Linux == FreeBSD ]]
00:01:14.457 + sudo dmesg -Tw
00:01:14.457 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:14.457 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:14.457 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:14.457 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:14.457 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:14.457 + [[ -x /usr/src/fio-static/fio ]]
00:01:14.457 + export FIO_BIN=/usr/src/fio-static/fio
00:01:14.457 + FIO_BIN=/usr/src/fio-static/fio
00:01:14.457 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:14.457 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:14.457 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:14.457 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:14.457 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:14.457 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:14.457 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:14.457 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:14.457 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:14.457 Test configuration:
00:01:14.457 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.457 SPDK_TEST_NVMF=1
00:01:14.457 SPDK_TEST_NVME_CLI=1
00:01:14.457 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:14.457 SPDK_TEST_NVMF_NICS=e810
00:01:14.457 SPDK_TEST_VFIOUSER=1
00:01:14.457 SPDK_RUN_UBSAN=1
00:01:14.457 NET_TYPE=phy
00:01:14.716 RUN_NIGHTLY=0 21:23:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:14.716 21:23:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:14.716 21:23:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:14.716
21:23:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.716 21:23:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.716 21:23:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.716 21:23:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.716 21:23:05 -- paths/export.sh@5 -- $ export PATH 00:01:14.716 21:23:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.716 21:23:05 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:14.716 21:23:05 -- 
common/autobuild_common.sh@444 -- $ date +%s 00:01:14.716 21:23:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721071385.XXXXXX 00:01:14.716 21:23:05 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721071385.ckuzBj 00:01:14.716 21:23:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:14.716 21:23:05 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:14.716 21:23:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:14.716 21:23:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:14.716 21:23:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.716 21:23:05 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:14.716 21:23:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:14.716 21:23:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.716 21:23:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:14.716 21:23:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:14.716 21:23:05 -- pm/common@17 -- $ local monitor 00:01:14.716 21:23:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.716 21:23:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.716 21:23:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.716 21:23:05 -- pm/common@21 -- $ date +%s 00:01:14.716 21:23:05 -- pm/common@19 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.716 21:23:05 -- pm/common@21 -- $ date +%s 00:01:14.716 21:23:05 -- pm/common@25 -- $ sleep 1 00:01:14.716 21:23:05 -- pm/common@21 -- $ date +%s 00:01:14.716 21:23:05 -- pm/common@21 -- $ date +%s 00:01:14.716 21:23:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721071385 00:01:14.716 21:23:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721071385 00:01:14.716 21:23:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721071385 00:01:14.716 21:23:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721071385 00:01:14.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721071385_collect-vmstat.pm.log 00:01:14.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721071385_collect-cpu-load.pm.log 00:01:14.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721071385_collect-cpu-temp.pm.log 00:01:14.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721071385_collect-bmc-pm.bmc.pm.log 00:01:15.655 21:23:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:15.655 21:23:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.655 21:23:06 -- spdk/autobuild.sh@12 -- 
$ umask 022 00:01:15.655 21:23:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.655 21:23:06 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.655 Mon Jul 15 07:23:06 PM UTC 2024 00:01:15.655 21:23:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.655 v24.09-pre-210-g996bd8752 00:01:15.655 21:23:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.655 21:23:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.655 21:23:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.655 21:23:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:15.655 21:23:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:15.655 21:23:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.655 ************************************ 00:01:15.655 START TEST ubsan 00:01:15.655 ************************************ 00:01:15.655 21:23:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:15.655 using ubsan 00:01:15.655 00:01:15.655 real 0m0.000s 00:01:15.655 user 0m0.000s 00:01:15.655 sys 0m0.000s 00:01:15.655 21:23:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:15.655 21:23:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.655 ************************************ 00:01:15.655 END TEST ubsan 00:01:15.655 ************************************ 00:01:15.655 21:23:06 -- common/autotest_common.sh@1142 -- $ return 0 00:01:15.655 21:23:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.655 21:23:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.655 21:23:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.655 21:23:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.655 21:23:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.655 21:23:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.655 21:23:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.655 21:23:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 
00:01:15.655 21:23:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:15.916 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:15.916 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.176 Using 'verbs' RDMA provider 00:01:26.736 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:38.956 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:38.956 Creating mk/config.mk...done. 00:01:38.956 Creating mk/cc.flags.mk...done. 00:01:38.956 Type 'make' to build. 00:01:38.956 21:23:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j32 00:01:38.956 21:23:28 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:38.956 21:23:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:38.956 21:23:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.956 ************************************ 00:01:38.956 START TEST make 00:01:38.956 ************************************ 00:01:38.956 21:23:28 make -- common/autotest_common.sh@1123 -- $ make -j32 00:01:38.956 make[1]: Nothing to be done for 'all'. 
00:01:39.528 The Meson build system 00:01:39.528 Version: 1.3.1 00:01:39.528 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:39.528 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:39.528 Build type: native build 00:01:39.528 Project name: libvfio-user 00:01:39.528 Project version: 0.0.1 00:01:39.528 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.528 C linker for the host machine: cc ld.bfd 2.39-16 00:01:39.528 Host machine cpu family: x86_64 00:01:39.528 Host machine cpu: x86_64 00:01:39.528 Run-time dependency threads found: YES 00:01:39.528 Library dl found: YES 00:01:39.528 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.528 Run-time dependency json-c found: YES 0.17 00:01:39.528 Run-time dependency cmocka found: YES 1.1.7 00:01:39.528 Program pytest-3 found: NO 00:01:39.528 Program flake8 found: NO 00:01:39.528 Program misspell-fixer found: NO 00:01:39.528 Program restructuredtext-lint found: NO 00:01:39.528 Program valgrind found: YES (/usr/bin/valgrind) 00:01:39.528 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.528 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.528 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.528 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:39.528 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:39.528 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:39.528 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:39.528 Build targets in project: 8 00:01:39.528 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:39.528 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:39.528 00:01:39.528 libvfio-user 0.0.1 00:01:39.528 00:01:39.528 User defined options 00:01:39.528 buildtype : debug 00:01:39.529 default_library: shared 00:01:39.529 libdir : /usr/local/lib 00:01:39.529 00:01:39.529 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.474 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:40.474 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:40.474 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:40.474 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:40.474 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:40.474 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:40.474 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:40.474 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:40.474 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:40.474 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:40.474 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:40.474 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:40.474 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:40.474 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:40.474 [14/37] Compiling C object samples/null.p/null.c.o 00:01:40.474 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:40.474 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:40.739 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:40.739 [18/37] Compiling C 
object test/unit_tests.p/.._lib_pci.c.o 00:01:40.739 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:40.739 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:40.739 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:40.739 [22/37] Compiling C object samples/client.p/client.c.o 00:01:40.739 [23/37] Compiling C object samples/server.p/server.c.o 00:01:40.739 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:40.739 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:40.739 [26/37] Linking target samples/client 00:01:40.739 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:40.739 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:40.739 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:40.739 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:41.001 [31/37] Linking target test/unit_tests 00:01:41.001 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:41.001 [33/37] Linking target samples/lspci 00:01:41.001 [34/37] Linking target samples/null 00:01:41.001 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:41.001 [36/37] Linking target samples/gpio-pci-idio-16 00:01:41.001 [37/37] Linking target samples/server 00:01:41.001 INFO: autodetecting backend as ninja 00:01:41.001 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.264 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.842 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:41.842 ninja: no work to do. 
00:01:48.428 The Meson build system 00:01:48.428 Version: 1.3.1 00:01:48.428 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:48.428 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:48.428 Build type: native build 00:01:48.428 Program cat found: YES (/usr/bin/cat) 00:01:48.428 Project name: DPDK 00:01:48.428 Project version: 24.03.0 00:01:48.428 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:48.428 C linker for the host machine: cc ld.bfd 2.39-16 00:01:48.428 Host machine cpu family: x86_64 00:01:48.428 Host machine cpu: x86_64 00:01:48.428 Message: ## Building in Developer Mode ## 00:01:48.428 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.428 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:48.428 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.428 Program python3 found: YES (/usr/bin/python3) 00:01:48.428 Program cat found: YES (/usr/bin/cat) 00:01:48.428 Compiler for C supports arguments -march=native: YES 00:01:48.428 Checking for size of "void *" : 8 00:01:48.428 Checking for size of "void *" : 8 (cached) 00:01:48.428 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:48.428 Library m found: YES 00:01:48.428 Library numa found: YES 00:01:48.428 Has header "numaif.h" : YES 00:01:48.428 Library fdt found: NO 00:01:48.428 Library execinfo found: NO 00:01:48.428 Has header "execinfo.h" : YES 00:01:48.428 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:48.428 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.428 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.428 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.428 Run-time dependency openssl found: YES 3.0.9 00:01:48.428 Run-time 
dependency libpcap found: YES 1.10.4 00:01:48.428 Has header "pcap.h" with dependency libpcap: YES 00:01:48.428 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.428 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.428 Compiler for C supports arguments -Wformat: YES 00:01:48.428 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.428 Compiler for C supports arguments -Wformat-security: NO 00:01:48.428 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.428 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.428 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.428 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.428 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.428 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.428 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.428 Compiler for C supports arguments -Wundef: YES 00:01:48.428 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.428 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.428 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:48.428 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.428 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.428 Program objdump found: YES (/usr/bin/objdump) 00:01:48.428 Compiler for C supports arguments -mavx512f: YES 00:01:48.428 Checking if "AVX512 checking" compiles: YES 00:01:48.428 Fetching value of define "__SSE4_2__" : 1 00:01:48.428 Fetching value of define "__AES__" : 1 00:01:48.428 Fetching value of define "__AVX__" : 1 00:01:48.428 Fetching value of define "__AVX2__" : (undefined) 00:01:48.428 Fetching value of define "__AVX512BW__" : (undefined) 00:01:48.428 Fetching value of define "__AVX512CD__" : (undefined) 00:01:48.428 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:48.428 Fetching 
value of define "__AVX512F__" : (undefined) 00:01:48.428 Fetching value of define "__AVX512VL__" : (undefined) 00:01:48.428 Fetching value of define "__PCLMUL__" : 1 00:01:48.428 Fetching value of define "__RDRND__" : (undefined) 00:01:48.428 Fetching value of define "__RDSEED__" : (undefined) 00:01:48.428 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.428 Fetching value of define "__znver1__" : (undefined) 00:01:48.428 Fetching value of define "__znver2__" : (undefined) 00:01:48.428 Fetching value of define "__znver3__" : (undefined) 00:01:48.428 Fetching value of define "__znver4__" : (undefined) 00:01:48.428 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.428 Message: lib/log: Defining dependency "log" 00:01:48.428 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.428 Message: lib/telemetry: Defining dependency "telemetry" 00:01:48.428 Checking for function "getentropy" : NO 00:01:48.428 Message: lib/eal: Defining dependency "eal" 00:01:48.428 Message: lib/ring: Defining dependency "ring" 00:01:48.428 Message: lib/rcu: Defining dependency "rcu" 00:01:48.428 Message: lib/mempool: Defining dependency "mempool" 00:01:48.428 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.428 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.428 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.428 Compiler for C supports arguments -mpclmul: YES 00:01:48.428 Compiler for C supports arguments -maes: YES 00:01:48.428 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.428 Compiler for C supports arguments -mavx512bw: YES 00:01:48.428 Compiler for C supports arguments -mavx512dq: YES 00:01:48.428 Compiler for C supports arguments -mavx512vl: YES 00:01:48.428 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.428 Compiler for C supports arguments -mavx2: YES 00:01:48.428 Compiler for C supports arguments -mavx: YES 00:01:48.428 Message: lib/net: Defining dependency "net" 
00:01:48.428 Message: lib/meter: Defining dependency "meter" 00:01:48.428 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.428 Message: lib/pci: Defining dependency "pci" 00:01:48.428 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.428 Message: lib/hash: Defining dependency "hash" 00:01:48.428 Message: lib/timer: Defining dependency "timer" 00:01:48.428 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.428 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.428 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.428 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.428 Message: lib/power: Defining dependency "power" 00:01:48.428 Message: lib/reorder: Defining dependency "reorder" 00:01:48.428 Message: lib/security: Defining dependency "security" 00:01:48.428 Has header "linux/userfaultfd.h" : YES 00:01:48.428 Has header "linux/vduse.h" : YES 00:01:48.428 Message: lib/vhost: Defining dependency "vhost" 00:01:48.428 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.428 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.428 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.428 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.428 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:48.428 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:48.428 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:48.428 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:48.428 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:48.428 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:48.428 Program doxygen found: YES (/usr/bin/doxygen) 00:01:48.428 Configuring doxy-api-html.conf using configuration 00:01:48.428 Configuring doxy-api-man.conf using configuration 
00:01:48.428 Program mandb found: YES (/usr/bin/mandb) 00:01:48.428 Program sphinx-build found: NO 00:01:48.428 Configuring rte_build_config.h using configuration 00:01:48.428 Message: 00:01:48.428 ================= 00:01:48.428 Applications Enabled 00:01:48.428 ================= 00:01:48.428 00:01:48.428 apps: 00:01:48.428 00:01:48.428 00:01:48.428 Message: 00:01:48.428 ================= 00:01:48.428 Libraries Enabled 00:01:48.428 ================= 00:01:48.428 00:01:48.428 libs: 00:01:48.428 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:48.428 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:48.428 cryptodev, dmadev, power, reorder, security, vhost, 00:01:48.428 00:01:48.428 Message: 00:01:48.428 =============== 00:01:48.428 Drivers Enabled 00:01:48.428 =============== 00:01:48.428 00:01:48.428 common: 00:01:48.428 00:01:48.428 bus: 00:01:48.429 pci, vdev, 00:01:48.429 mempool: 00:01:48.429 ring, 00:01:48.429 dma: 00:01:48.429 00:01:48.429 net: 00:01:48.429 00:01:48.429 crypto: 00:01:48.429 00:01:48.429 compress: 00:01:48.429 00:01:48.429 vdpa: 00:01:48.429 00:01:48.429 00:01:48.429 Message: 00:01:48.429 ================= 00:01:48.429 Content Skipped 00:01:48.429 ================= 00:01:48.429 00:01:48.429 apps: 00:01:48.429 dumpcap: explicitly disabled via build config 00:01:48.429 graph: explicitly disabled via build config 00:01:48.429 pdump: explicitly disabled via build config 00:01:48.429 proc-info: explicitly disabled via build config 00:01:48.429 test-acl: explicitly disabled via build config 00:01:48.429 test-bbdev: explicitly disabled via build config 00:01:48.429 test-cmdline: explicitly disabled via build config 00:01:48.429 test-compress-perf: explicitly disabled via build config 00:01:48.429 test-crypto-perf: explicitly disabled via build config 00:01:48.429 test-dma-perf: explicitly disabled via build config 00:01:48.429 test-eventdev: explicitly disabled via build config 00:01:48.429 test-fib: explicitly disabled 
via build config 00:01:48.429 test-flow-perf: explicitly disabled via build config 00:01:48.429 test-gpudev: explicitly disabled via build config 00:01:48.429 test-mldev: explicitly disabled via build config 00:01:48.429 test-pipeline: explicitly disabled via build config 00:01:48.429 test-pmd: explicitly disabled via build config 00:01:48.429 test-regex: explicitly disabled via build config 00:01:48.429 test-sad: explicitly disabled via build config 00:01:48.429 test-security-perf: explicitly disabled via build config 00:01:48.429 00:01:48.429 libs: 00:01:48.429 argparse: explicitly disabled via build config 00:01:48.429 metrics: explicitly disabled via build config 00:01:48.429 acl: explicitly disabled via build config 00:01:48.429 bbdev: explicitly disabled via build config 00:01:48.429 bitratestats: explicitly disabled via build config 00:01:48.429 bpf: explicitly disabled via build config 00:01:48.429 cfgfile: explicitly disabled via build config 00:01:48.429 distributor: explicitly disabled via build config 00:01:48.429 efd: explicitly disabled via build config 00:01:48.429 eventdev: explicitly disabled via build config 00:01:48.429 dispatcher: explicitly disabled via build config 00:01:48.429 gpudev: explicitly disabled via build config 00:01:48.429 gro: explicitly disabled via build config 00:01:48.429 gso: explicitly disabled via build config 00:01:48.429 ip_frag: explicitly disabled via build config 00:01:48.429 jobstats: explicitly disabled via build config 00:01:48.429 latencystats: explicitly disabled via build config 00:01:48.429 lpm: explicitly disabled via build config 00:01:48.429 member: explicitly disabled via build config 00:01:48.429 pcapng: explicitly disabled via build config 00:01:48.429 rawdev: explicitly disabled via build config 00:01:48.429 regexdev: explicitly disabled via build config 00:01:48.429 mldev: explicitly disabled via build config 00:01:48.429 rib: explicitly disabled via build config 00:01:48.429 sched: explicitly disabled 
via build config 00:01:48.429 stack: explicitly disabled via build config 00:01:48.429 ipsec: explicitly disabled via build config 00:01:48.429 pdcp: explicitly disabled via build config 00:01:48.429 fib: explicitly disabled via build config 00:01:48.429 port: explicitly disabled via build config 00:01:48.429 pdump: explicitly disabled via build config 00:01:48.429 table: explicitly disabled via build config 00:01:48.429 pipeline: explicitly disabled via build config 00:01:48.429 graph: explicitly disabled via build config 00:01:48.429 node: explicitly disabled via build config 00:01:48.429 00:01:48.429 drivers: 00:01:48.429 common/cpt: not in enabled drivers build config 00:01:48.429 common/dpaax: not in enabled drivers build config 00:01:48.429 common/iavf: not in enabled drivers build config 00:01:48.429 common/idpf: not in enabled drivers build config 00:01:48.429 common/ionic: not in enabled drivers build config 00:01:48.429 common/mvep: not in enabled drivers build config 00:01:48.429 common/octeontx: not in enabled drivers build config 00:01:48.429 bus/auxiliary: not in enabled drivers build config 00:01:48.429 bus/cdx: not in enabled drivers build config 00:01:48.429 bus/dpaa: not in enabled drivers build config 00:01:48.429 bus/fslmc: not in enabled drivers build config 00:01:48.429 bus/ifpga: not in enabled drivers build config 00:01:48.429 bus/platform: not in enabled drivers build config 00:01:48.429 bus/uacce: not in enabled drivers build config 00:01:48.429 bus/vmbus: not in enabled drivers build config 00:01:48.429 common/cnxk: not in enabled drivers build config 00:01:48.429 common/mlx5: not in enabled drivers build config 00:01:48.429 common/nfp: not in enabled drivers build config 00:01:48.429 common/nitrox: not in enabled drivers build config 00:01:48.429 common/qat: not in enabled drivers build config 00:01:48.429 common/sfc_efx: not in enabled drivers build config 00:01:48.429 mempool/bucket: not in enabled drivers build config 00:01:48.429 
mempool/cnxk: not in enabled drivers build config 00:01:48.429 mempool/dpaa: not in enabled drivers build config 00:01:48.429 mempool/dpaa2: not in enabled drivers build config 00:01:48.429 mempool/octeontx: not in enabled drivers build config 00:01:48.429 mempool/stack: not in enabled drivers build config 00:01:48.429 dma/cnxk: not in enabled drivers build config 00:01:48.429 dma/dpaa: not in enabled drivers build config 00:01:48.429 dma/dpaa2: not in enabled drivers build config 00:01:48.429 dma/hisilicon: not in enabled drivers build config 00:01:48.429 dma/idxd: not in enabled drivers build config 00:01:48.429 dma/ioat: not in enabled drivers build config 00:01:48.429 dma/skeleton: not in enabled drivers build config 00:01:48.429 net/af_packet: not in enabled drivers build config 00:01:48.429 net/af_xdp: not in enabled drivers build config 00:01:48.429 net/ark: not in enabled drivers build config 00:01:48.429 net/atlantic: not in enabled drivers build config 00:01:48.429 net/avp: not in enabled drivers build config 00:01:48.429 net/axgbe: not in enabled drivers build config 00:01:48.429 net/bnx2x: not in enabled drivers build config 00:01:48.429 net/bnxt: not in enabled drivers build config 00:01:48.429 net/bonding: not in enabled drivers build config 00:01:48.429 net/cnxk: not in enabled drivers build config 00:01:48.429 net/cpfl: not in enabled drivers build config 00:01:48.429 net/cxgbe: not in enabled drivers build config 00:01:48.429 net/dpaa: not in enabled drivers build config 00:01:48.429 net/dpaa2: not in enabled drivers build config 00:01:48.429 net/e1000: not in enabled drivers build config 00:01:48.429 net/ena: not in enabled drivers build config 00:01:48.429 net/enetc: not in enabled drivers build config 00:01:48.429 net/enetfec: not in enabled drivers build config 00:01:48.429 net/enic: not in enabled drivers build config 00:01:48.429 net/failsafe: not in enabled drivers build config 00:01:48.429 net/fm10k: not in enabled drivers build config 
00:01:48.429 net/gve: not in enabled drivers build config 00:01:48.429 net/hinic: not in enabled drivers build config 00:01:48.429 net/hns3: not in enabled drivers build config 00:01:48.429 net/i40e: not in enabled drivers build config 00:01:48.429 net/iavf: not in enabled drivers build config 00:01:48.429 net/ice: not in enabled drivers build config 00:01:48.429 net/idpf: not in enabled drivers build config 00:01:48.429 net/igc: not in enabled drivers build config 00:01:48.429 net/ionic: not in enabled drivers build config 00:01:48.429 net/ipn3ke: not in enabled drivers build config 00:01:48.429 net/ixgbe: not in enabled drivers build config 00:01:48.429 net/mana: not in enabled drivers build config 00:01:48.429 net/memif: not in enabled drivers build config 00:01:48.429 net/mlx4: not in enabled drivers build config 00:01:48.429 net/mlx5: not in enabled drivers build config 00:01:48.429 net/mvneta: not in enabled drivers build config 00:01:48.429 net/mvpp2: not in enabled drivers build config 00:01:48.429 net/netvsc: not in enabled drivers build config 00:01:48.429 net/nfb: not in enabled drivers build config 00:01:48.429 net/nfp: not in enabled drivers build config 00:01:48.429 net/ngbe: not in enabled drivers build config 00:01:48.429 net/null: not in enabled drivers build config 00:01:48.429 net/octeontx: not in enabled drivers build config 00:01:48.429 net/octeon_ep: not in enabled drivers build config 00:01:48.429 net/pcap: not in enabled drivers build config 00:01:48.429 net/pfe: not in enabled drivers build config 00:01:48.429 net/qede: not in enabled drivers build config 00:01:48.429 net/ring: not in enabled drivers build config 00:01:48.429 net/sfc: not in enabled drivers build config 00:01:48.429 net/softnic: not in enabled drivers build config 00:01:48.429 net/tap: not in enabled drivers build config 00:01:48.429 net/thunderx: not in enabled drivers build config 00:01:48.429 net/txgbe: not in enabled drivers build config 00:01:48.429 net/vdev_netvsc: 
not in enabled drivers build config 00:01:48.429 net/vhost: not in enabled drivers build config 00:01:48.429 net/virtio: not in enabled drivers build config 00:01:48.429 net/vmxnet3: not in enabled drivers build config 00:01:48.429 raw/*: missing internal dependency, "rawdev" 00:01:48.429 crypto/armv8: not in enabled drivers build config 00:01:48.429 crypto/bcmfs: not in enabled drivers build config 00:01:48.429 crypto/caam_jr: not in enabled drivers build config 00:01:48.429 crypto/ccp: not in enabled drivers build config 00:01:48.429 crypto/cnxk: not in enabled drivers build config 00:01:48.429 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.429 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.429 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.429 crypto/mlx5: not in enabled drivers build config 00:01:48.429 crypto/mvsam: not in enabled drivers build config 00:01:48.429 crypto/nitrox: not in enabled drivers build config 00:01:48.429 crypto/null: not in enabled drivers build config 00:01:48.429 crypto/octeontx: not in enabled drivers build config 00:01:48.429 crypto/openssl: not in enabled drivers build config 00:01:48.429 crypto/scheduler: not in enabled drivers build config 00:01:48.429 crypto/uadk: not in enabled drivers build config 00:01:48.429 crypto/virtio: not in enabled drivers build config 00:01:48.429 compress/isal: not in enabled drivers build config 00:01:48.429 compress/mlx5: not in enabled drivers build config 00:01:48.429 compress/nitrox: not in enabled drivers build config 00:01:48.429 compress/octeontx: not in enabled drivers build config 00:01:48.429 compress/zlib: not in enabled drivers build config 00:01:48.429 regex/*: missing internal dependency, "regexdev" 00:01:48.429 ml/*: missing internal dependency, "mldev" 00:01:48.429 vdpa/ifc: not in enabled drivers build config 00:01:48.429 vdpa/mlx5: not in enabled drivers build config 00:01:48.429 vdpa/nfp: not in enabled drivers build config 00:01:48.429 
vdpa/sfc: not in enabled drivers build config 00:01:48.429 event/*: missing internal dependency, "eventdev" 00:01:48.429 baseband/*: missing internal dependency, "bbdev" 00:01:48.429 gpu/*: missing internal dependency, "gpudev" 00:01:48.429 00:01:48.429 00:01:48.429 Build targets in project: 85 00:01:48.429 00:01:48.429 DPDK 24.03.0 00:01:48.429 00:01:48.429 User defined options 00:01:48.429 buildtype : debug 00:01:48.430 default_library : shared 00:01:48.430 libdir : lib 00:01:48.430 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:48.430 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:48.430 c_link_args : 00:01:48.430 cpu_instruction_set: native 00:01:48.430 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:48.430 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:48.430 enable_docs : false 00:01:48.430 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:48.430 enable_kmods : false 00:01:48.430 max_lcores : 128 00:01:48.430 tests : false 00:01:48.430 00:01:48.430 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.430 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:48.430 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.430 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:48.430 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.430 [4/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.430 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:48.430 [6/268] Linking static target lib/librte_kvargs.a 00:01:48.430 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:48.689 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.690 [9/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:48.690 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.690 [11/268] Linking static target lib/librte_log.a 00:01:48.690 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.690 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.690 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:49.262 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.262 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:49.262 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:49.262 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:49.262 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:49.262 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:49.262 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:49.262 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:49.262 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:49.262 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:49.262 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:49.534 [26/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:49.534 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:49.534 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:49.534 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:49.534 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:49.534 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:49.534 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:49.534 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:49.534 [34/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:49.534 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:49.534 [36/268] Linking static target lib/librte_telemetry.a 00:01:49.534 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:49.534 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:49.534 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:49.534 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:49.534 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:49.534 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:49.797 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:49.797 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:49.797 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:49.797 [46/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.797 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:49.797 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:49.797 [49/268] Linking target lib/librte_log.so.24.1 00:01:49.797 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:49.797 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:50.056 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:50.318 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:50.318 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.318 [55/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:50.318 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:50.318 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:50.318 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:50.318 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:50.318 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:50.318 [61/268] Linking target lib/librte_kvargs.so.24.1 00:01:50.318 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:50.318 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:50.318 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:50.318 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:50.318 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:50.318 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:50.580 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:50.580 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:50.580 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.580 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:50.580 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:50.580 [73/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.580 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:50.580 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:50.580 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:50.580 [77/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:50.580 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:50.580 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:50.580 [80/268] Linking target lib/librte_telemetry.so.24.1 00:01:50.580 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:50.841 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:50.841 [83/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:50.841 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:50.841 [85/268] Linking static target lib/librte_ring.a 00:01:51.102 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.102 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.102 [88/268] Linking static target lib/librte_eal.a 00:01:51.102 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.102 [90/268] Linking static target lib/librte_rcu.a 00:01:51.102 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.102 [92/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.102 [93/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.370 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.370 
[95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.370 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.370 [97/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.370 [98/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.370 [99/268] Linking static target lib/librte_meter.a 00:01:51.370 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.370 [101/268] Linking static target lib/librte_pci.a 00:01:51.370 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.370 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.370 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.634 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.634 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.634 [107/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.634 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.634 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.634 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.634 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.634 [112/268] Linking static target lib/librte_mempool.a 00:01:51.634 [113/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.634 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.634 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.634 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.634 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.634 [118/268] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.634 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.634 [120/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.634 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.634 [122/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:51.634 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.894 [124/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.894 [125/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.894 [126/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.894 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.894 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:51.894 [129/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:52.154 [130/268] Linking static target lib/librte_net.a 00:01:52.154 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.154 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.154 [133/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.154 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.154 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.154 [136/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.154 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.154 [138/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.154 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.417 [140/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:52.417 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.417 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.417 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.417 [144/268] Linking static target lib/librte_cmdline.a 00:01:52.677 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.677 [146/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.677 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.677 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.677 [149/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.677 [150/268] Linking static target lib/librte_timer.a 00:01:52.677 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.677 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:52.677 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.677 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.938 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.938 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.938 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.200 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:53.200 [159/268] Linking static target lib/librte_dmadev.a 00:01:53.200 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.200 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.200 [162/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 
00:01:53.200 [163/268] Linking static target lib/librte_mbuf.a 00:01:53.200 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:53.200 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.200 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.200 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.461 [168/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.461 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.461 [170/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.461 [171/268] Linking static target lib/librte_hash.a 00:01:53.461 [172/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.461 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.461 [174/268] Linking static target lib/librte_compressdev.a 00:01:53.461 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.461 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.461 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.461 [178/268] Linking static target lib/librte_power.a 00:01:53.723 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.723 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:53.982 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.982 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.982 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.982 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.982 [185/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.982 [186/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.982 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.982 [188/268] Linking static target lib/librte_reorder.a 00:01:53.982 [189/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.982 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:53.982 [191/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.982 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:54.240 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.240 [194/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.240 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.240 [196/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.240 [197/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.240 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:54.240 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.240 [200/268] Linking static target lib/librte_security.a 00:01:54.240 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.240 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.240 [203/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:54.240 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:54.240 [205/268] Linking static target lib/librte_ethdev.a 00:01:54.240 [206/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.240 
[207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.241 [208/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:54.499 [209/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.499 [210/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.499 [211/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.499 [212/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:54.499 [213/268] Linking static target drivers/librte_bus_vdev.a 00:01:54.499 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.499 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.499 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.499 [217/268] Linking static target drivers/librte_bus_pci.a 00:01:54.500 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.500 [219/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.500 [220/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:54.500 [221/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.500 [222/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.500 [223/268] Linking static target drivers/librte_mempool_ring.a 00:01:54.500 [224/268] Linking static target lib/librte_cryptodev.a 00:01:54.500 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.065 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.632 [227/268] Generating lib/cryptodev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:57.004 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.378 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.378 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.378 [231/268] Linking target lib/librte_eal.so.24.1 00:01:58.378 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.378 [233/268] Linking target lib/librte_pci.so.24.1 00:01:58.378 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.378 [235/268] Linking target lib/librte_dmadev.so.24.1 00:01:58.378 [236/268] Linking target lib/librte_ring.so.24.1 00:01:58.378 [237/268] Linking target lib/librte_timer.so.24.1 00:01:58.378 [238/268] Linking target lib/librte_meter.so.24.1 00:01:58.378 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.378 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.378 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.378 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.378 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.636 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:58.636 [245/268] Linking target lib/librte_rcu.so.24.1 00:01:58.636 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:58.636 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:58.636 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.636 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.636 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:58.894 [251/268] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.894 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:58.894 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:58.894 [254/268] Linking target lib/librte_compressdev.so.24.1 00:01:58.894 [255/268] Linking target lib/librte_net.so.24.1 00:01:58.894 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:58.894 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:59.153 [258/268] Linking target lib/librte_security.so.24.1 00:01:59.153 [259/268] Linking target lib/librte_hash.so.24.1 00:01:59.153 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:59.153 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:59.153 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.153 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.413 [264/268] Linking target lib/librte_power.so.24.1 00:02:02.720 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.720 [266/268] Linking static target lib/librte_vhost.a 00:02:03.284 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.541 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:03.541 INFO: autodetecting backend as ninja 00:02:03.541 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 32 00:02:04.474 CC lib/ut/ut.o 00:02:04.474 CC lib/ut_mock/mock.o 00:02:04.474 CC lib/log/log.o 00:02:04.474 CC lib/log/log_flags.o 00:02:04.474 CC lib/log/log_deprecated.o 00:02:04.732 LIB libspdk_log.a 00:02:04.732 LIB libspdk_ut.a 00:02:04.732 LIB libspdk_ut_mock.a 00:02:04.732 SO libspdk_ut.so.2.0 00:02:04.732 SO libspdk_ut_mock.so.6.0 00:02:04.732 SO libspdk_log.so.7.0 00:02:04.732 SYMLINK libspdk_ut.so 00:02:04.732 SYMLINK 
libspdk_ut_mock.so 00:02:04.732 SYMLINK libspdk_log.so 00:02:04.993 CC lib/dma/dma.o 00:02:04.993 CC lib/ioat/ioat.o 00:02:04.993 CXX lib/trace_parser/trace.o 00:02:04.993 CC lib/util/base64.o 00:02:04.993 CC lib/util/bit_array.o 00:02:04.993 CC lib/util/cpuset.o 00:02:04.993 CC lib/util/crc16.o 00:02:04.993 CC lib/util/crc32.o 00:02:04.993 CC lib/util/crc32c.o 00:02:04.993 CC lib/util/crc32_ieee.o 00:02:04.993 CC lib/util/crc64.o 00:02:04.993 CC lib/util/dif.o 00:02:04.993 CC lib/util/fd.o 00:02:04.993 CC lib/util/file.o 00:02:04.993 CC lib/util/hexlify.o 00:02:04.993 CC lib/util/iov.o 00:02:04.993 CC lib/util/pipe.o 00:02:04.993 CC lib/util/math.o 00:02:04.993 CC lib/util/strerror_tls.o 00:02:04.993 CC lib/util/uuid.o 00:02:04.993 CC lib/util/string.o 00:02:04.993 CC lib/util/fd_group.o 00:02:04.993 CC lib/util/xor.o 00:02:04.993 CC lib/util/zipf.o 00:02:05.251 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.251 CC lib/vfio_user/host/vfio_user.o 00:02:05.251 LIB libspdk_dma.a 00:02:05.251 SO libspdk_dma.so.4.0 00:02:05.251 SYMLINK libspdk_dma.so 00:02:05.509 LIB libspdk_vfio_user.a 00:02:05.509 SO libspdk_vfio_user.so.5.0 00:02:05.509 LIB libspdk_ioat.a 00:02:05.509 SO libspdk_ioat.so.7.0 00:02:05.509 SYMLINK libspdk_vfio_user.so 00:02:05.509 SYMLINK libspdk_ioat.so 00:02:05.509 LIB libspdk_util.a 00:02:05.767 SO libspdk_util.so.9.1 00:02:05.767 SYMLINK libspdk_util.so 00:02:06.025 CC lib/rdma_provider/common.o 00:02:06.025 CC lib/rdma_utils/rdma_utils.o 00:02:06.025 CC lib/vmd/vmd.o 00:02:06.025 CC lib/env_dpdk/env.o 00:02:06.025 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:06.025 CC lib/idxd/idxd.o 00:02:06.025 CC lib/env_dpdk/memory.o 00:02:06.025 CC lib/json/json_parse.o 00:02:06.025 CC lib/idxd/idxd_user.o 00:02:06.025 CC lib/conf/conf.o 00:02:06.025 CC lib/vmd/led.o 00:02:06.025 CC lib/env_dpdk/pci.o 00:02:06.025 CC lib/idxd/idxd_kernel.o 00:02:06.025 CC lib/json/json_util.o 00:02:06.025 CC lib/env_dpdk/threads.o 00:02:06.025 CC lib/env_dpdk/init.o 
00:02:06.025 CC lib/env_dpdk/pci_ioat.o 00:02:06.025 CC lib/json/json_write.o 00:02:06.025 CC lib/env_dpdk/pci_virtio.o 00:02:06.025 CC lib/env_dpdk/pci_vmd.o 00:02:06.025 CC lib/env_dpdk/pci_idxd.o 00:02:06.025 CC lib/env_dpdk/pci_event.o 00:02:06.025 CC lib/env_dpdk/sigbus_handler.o 00:02:06.025 CC lib/env_dpdk/pci_dpdk.o 00:02:06.025 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:06.025 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.025 LIB libspdk_trace_parser.a 00:02:06.025 SO libspdk_trace_parser.so.5.0 00:02:06.282 SYMLINK libspdk_trace_parser.so 00:02:06.282 LIB libspdk_rdma_utils.a 00:02:06.282 LIB libspdk_rdma_provider.a 00:02:06.282 SO libspdk_rdma_utils.so.1.0 00:02:06.282 SO libspdk_rdma_provider.so.6.0 00:02:06.282 LIB libspdk_conf.a 00:02:06.282 SO libspdk_conf.so.6.0 00:02:06.282 SYMLINK libspdk_rdma_utils.so 00:02:06.282 LIB libspdk_json.a 00:02:06.282 SYMLINK libspdk_rdma_provider.so 00:02:06.540 SYMLINK libspdk_conf.so 00:02:06.540 SO libspdk_json.so.6.0 00:02:06.540 SYMLINK libspdk_json.so 00:02:06.540 CC lib/jsonrpc/jsonrpc_server.o 00:02:06.540 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:06.540 CC lib/jsonrpc/jsonrpc_client.o 00:02:06.540 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.798 LIB libspdk_idxd.a 00:02:06.798 SO libspdk_idxd.so.12.0 00:02:06.798 LIB libspdk_vmd.a 00:02:06.798 SYMLINK libspdk_idxd.so 00:02:06.798 SO libspdk_vmd.so.6.0 00:02:06.798 SYMLINK libspdk_vmd.so 00:02:06.798 LIB libspdk_jsonrpc.a 00:02:06.798 SO libspdk_jsonrpc.so.6.0 00:02:07.056 SYMLINK libspdk_jsonrpc.so 00:02:07.056 CC lib/rpc/rpc.o 00:02:07.313 LIB libspdk_rpc.a 00:02:07.313 SO libspdk_rpc.so.6.0 00:02:07.571 SYMLINK libspdk_rpc.so 00:02:07.571 CC lib/keyring/keyring.o 00:02:07.571 CC lib/keyring/keyring_rpc.o 00:02:07.571 CC lib/trace/trace.o 00:02:07.571 CC lib/trace/trace_flags.o 00:02:07.571 CC lib/notify/notify.o 00:02:07.571 CC lib/trace/trace_rpc.o 00:02:07.571 CC lib/notify/notify_rpc.o 00:02:07.828 LIB libspdk_notify.a 00:02:07.828 SO libspdk_notify.so.6.0 
00:02:07.828 LIB libspdk_keyring.a 00:02:07.828 SO libspdk_keyring.so.1.0 00:02:07.828 SYMLINK libspdk_notify.so 00:02:07.828 LIB libspdk_trace.a 00:02:07.828 SYMLINK libspdk_keyring.so 00:02:07.828 SO libspdk_trace.so.10.0 00:02:08.087 LIB libspdk_env_dpdk.a 00:02:08.087 SYMLINK libspdk_trace.so 00:02:08.087 SO libspdk_env_dpdk.so.14.1 00:02:08.087 CC lib/thread/thread.o 00:02:08.087 CC lib/thread/iobuf.o 00:02:08.087 CC lib/sock/sock.o 00:02:08.087 CC lib/sock/sock_rpc.o 00:02:08.087 SYMLINK libspdk_env_dpdk.so 00:02:08.653 LIB libspdk_sock.a 00:02:08.653 SO libspdk_sock.so.10.0 00:02:08.653 SYMLINK libspdk_sock.so 00:02:08.911 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.911 CC lib/nvme/nvme_ctrlr.o 00:02:08.911 CC lib/nvme/nvme_fabric.o 00:02:08.911 CC lib/nvme/nvme_ns_cmd.o 00:02:08.911 CC lib/nvme/nvme_ns.o 00:02:08.911 CC lib/nvme/nvme_pcie_common.o 00:02:08.911 CC lib/nvme/nvme_pcie.o 00:02:08.911 CC lib/nvme/nvme_qpair.o 00:02:08.911 CC lib/nvme/nvme.o 00:02:08.911 CC lib/nvme/nvme_quirks.o 00:02:08.911 CC lib/nvme/nvme_transport.o 00:02:08.911 CC lib/nvme/nvme_discovery.o 00:02:08.911 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.911 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.911 CC lib/nvme/nvme_tcp.o 00:02:08.911 CC lib/nvme/nvme_opal.o 00:02:08.911 CC lib/nvme/nvme_io_msg.o 00:02:08.911 CC lib/nvme/nvme_poll_group.o 00:02:08.911 CC lib/nvme/nvme_zns.o 00:02:08.911 CC lib/nvme/nvme_stubs.o 00:02:08.911 CC lib/nvme/nvme_auth.o 00:02:08.911 CC lib/nvme/nvme_cuse.o 00:02:08.911 CC lib/nvme/nvme_vfio_user.o 00:02:08.911 CC lib/nvme/nvme_rdma.o 00:02:10.284 LIB libspdk_thread.a 00:02:10.284 SO libspdk_thread.so.10.1 00:02:10.284 SYMLINK libspdk_thread.so 00:02:10.284 CC lib/init/json_config.o 00:02:10.284 CC lib/accel/accel.o 00:02:10.284 CC lib/init/subsystem.o 00:02:10.284 CC lib/virtio/virtio.o 00:02:10.284 CC lib/accel/accel_rpc.o 00:02:10.284 CC lib/accel/accel_sw.o 00:02:10.284 CC lib/virtio/virtio_vhost_user.o 00:02:10.284 CC lib/init/subsystem_rpc.o 
00:02:10.284 CC lib/init/rpc.o 00:02:10.284 CC lib/virtio/virtio_vfio_user.o 00:02:10.284 CC lib/vfu_tgt/tgt_endpoint.o 00:02:10.284 CC lib/virtio/virtio_pci.o 00:02:10.284 CC lib/blob/blobstore.o 00:02:10.284 CC lib/vfu_tgt/tgt_rpc.o 00:02:10.284 CC lib/blob/request.o 00:02:10.284 CC lib/blob/blob_bs_dev.o 00:02:10.284 CC lib/blob/zeroes.o 00:02:10.847 LIB libspdk_init.a 00:02:10.847 LIB libspdk_virtio.a 00:02:10.847 SO libspdk_init.so.5.0 00:02:10.847 SO libspdk_virtio.so.7.0 00:02:10.847 SYMLINK libspdk_init.so 00:02:10.847 LIB libspdk_vfu_tgt.a 00:02:10.847 SYMLINK libspdk_virtio.so 00:02:10.847 SO libspdk_vfu_tgt.so.3.0 00:02:10.847 SYMLINK libspdk_vfu_tgt.so 00:02:11.103 CC lib/event/app.o 00:02:11.103 CC lib/event/reactor.o 00:02:11.103 CC lib/event/log_rpc.o 00:02:11.103 CC lib/event/app_rpc.o 00:02:11.103 CC lib/event/scheduler_static.o 00:02:11.361 LIB libspdk_event.a 00:02:11.361 SO libspdk_event.so.14.0 00:02:11.361 LIB libspdk_accel.a 00:02:11.361 SO libspdk_accel.so.15.1 00:02:11.618 SYMLINK libspdk_event.so 00:02:11.618 SYMLINK libspdk_accel.so 00:02:11.618 LIB libspdk_nvme.a 00:02:11.618 CC lib/bdev/bdev.o 00:02:11.618 CC lib/bdev/bdev_rpc.o 00:02:11.618 CC lib/bdev/bdev_zone.o 00:02:11.618 CC lib/bdev/part.o 00:02:11.618 CC lib/bdev/scsi_nvme.o 00:02:11.876 SO libspdk_nvme.so.13.1 00:02:12.136 SYMLINK libspdk_nvme.so 00:02:13.516 LIB libspdk_blob.a 00:02:13.516 SO libspdk_blob.so.11.0 00:02:13.516 SYMLINK libspdk_blob.so 00:02:13.845 CC lib/lvol/lvol.o 00:02:13.845 CC lib/blobfs/blobfs.o 00:02:13.845 CC lib/blobfs/tree.o 00:02:14.118 LIB libspdk_bdev.a 00:02:14.118 SO libspdk_bdev.so.15.1 00:02:14.411 SYMLINK libspdk_bdev.so 00:02:14.411 CC lib/scsi/dev.o 00:02:14.411 CC lib/nvmf/ctrlr.o 00:02:14.411 CC lib/scsi/lun.o 00:02:14.411 CC lib/ublk/ublk.o 00:02:14.411 CC lib/scsi/port.o 00:02:14.411 CC lib/nvmf/ctrlr_discovery.o 00:02:14.411 CC lib/ublk/ublk_rpc.o 00:02:14.411 CC lib/scsi/scsi.o 00:02:14.411 CC lib/nvmf/ctrlr_bdev.o 00:02:14.411 CC 
lib/scsi/scsi_bdev.o 00:02:14.411 CC lib/nbd/nbd.o 00:02:14.411 CC lib/scsi/scsi_pr.o 00:02:14.411 CC lib/nvmf/nvmf.o 00:02:14.411 CC lib/nvmf/subsystem.o 00:02:14.411 CC lib/nbd/nbd_rpc.o 00:02:14.411 CC lib/scsi/scsi_rpc.o 00:02:14.411 CC lib/nvmf/nvmf_rpc.o 00:02:14.411 CC lib/scsi/task.o 00:02:14.411 CC lib/nvmf/transport.o 00:02:14.411 CC lib/ftl/ftl_core.o 00:02:14.411 CC lib/nvmf/tcp.o 00:02:14.411 CC lib/ftl/ftl_init.o 00:02:14.411 CC lib/nvmf/stubs.o 00:02:14.411 CC lib/ftl/ftl_layout.o 00:02:14.411 CC lib/ftl/ftl_debug.o 00:02:14.411 CC lib/nvmf/mdns_server.o 00:02:14.411 CC lib/ftl/ftl_io.o 00:02:14.411 CC lib/nvmf/vfio_user.o 00:02:14.411 CC lib/ftl/ftl_sb.o 00:02:14.411 CC lib/nvmf/rdma.o 00:02:14.687 CC lib/ftl/ftl_l2p.o 00:02:14.998 CC lib/ftl/ftl_l2p_flat.o 00:02:14.998 CC lib/nvmf/auth.o 00:02:14.998 CC lib/ftl/ftl_nv_cache.o 00:02:14.998 CC lib/ftl/ftl_band.o 00:02:14.998 CC lib/ftl/ftl_band_ops.o 00:02:14.998 LIB libspdk_blobfs.a 00:02:14.998 CC lib/ftl/ftl_writer.o 00:02:14.998 CC lib/ftl/ftl_rq.o 00:02:14.998 CC lib/ftl/ftl_reloc.o 00:02:14.998 SO libspdk_blobfs.so.10.0 00:02:14.998 CC lib/ftl/ftl_l2p_cache.o 00:02:14.998 CC lib/ftl/ftl_p2l.o 00:02:14.998 LIB libspdk_lvol.a 00:02:14.998 SYMLINK libspdk_blobfs.so 00:02:14.998 CC lib/ftl/mngt/ftl_mngt.o 00:02:14.998 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.270 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.270 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.270 SO libspdk_lvol.so.10.0 00:02:15.270 SYMLINK libspdk_lvol.so 00:02:15.270 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.270 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.270 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.270 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.270 LIB libspdk_nbd.a 00:02:15.270 SO libspdk_nbd.so.7.0 00:02:15.270 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.270 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.545 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.545 LIB libspdk_scsi.a 00:02:15.545 SYMLINK libspdk_nbd.so 00:02:15.545 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.545 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.545 CC lib/ftl/utils/ftl_conf.o 00:02:15.545 SO libspdk_scsi.so.9.0 00:02:15.545 CC lib/ftl/utils/ftl_md.o 00:02:15.545 CC lib/ftl/utils/ftl_mempool.o 00:02:15.545 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.545 CC lib/ftl/utils/ftl_property.o 00:02:15.545 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.545 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.545 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.545 SYMLINK libspdk_scsi.so 00:02:15.545 LIB libspdk_ublk.a 00:02:15.545 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.822 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.822 SO libspdk_ublk.so.3.0 00:02:15.822 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.822 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:15.822 SYMLINK libspdk_ublk.so 00:02:15.822 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.822 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.822 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.822 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.822 CC lib/ftl/base/ftl_base_dev.o 00:02:15.822 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.822 CC lib/ftl/ftl_trace.o 00:02:16.118 CC lib/iscsi/conn.o 00:02:16.118 CC lib/iscsi/init_grp.o 00:02:16.118 CC lib/iscsi/iscsi.o 00:02:16.118 CC lib/iscsi/md5.o 00:02:16.118 CC lib/iscsi/param.o 00:02:16.118 CC lib/iscsi/portal_grp.o 00:02:16.118 CC lib/iscsi/tgt_node.o 00:02:16.118 CC lib/iscsi/iscsi_subsystem.o 00:02:16.118 CC lib/vhost/vhost.o 00:02:16.118 CC lib/vhost/vhost_rpc.o 00:02:16.118 CC lib/iscsi/iscsi_rpc.o 00:02:16.118 CC lib/iscsi/task.o 00:02:16.118 CC lib/vhost/vhost_scsi.o 00:02:16.118 CC lib/vhost/vhost_blk.o 00:02:16.118 CC lib/vhost/rte_vhost_user.o 00:02:16.742 LIB libspdk_ftl.a 00:02:16.742 SO libspdk_ftl.so.9.0 00:02:17.014 SYMLINK libspdk_ftl.so 00:02:17.636 LIB libspdk_vhost.a 00:02:17.636 LIB libspdk_nvmf.a 00:02:17.636 SO libspdk_vhost.so.8.0 00:02:17.636 LIB libspdk_iscsi.a 00:02:17.636 SO libspdk_iscsi.so.8.0 00:02:17.636 SO libspdk_nvmf.so.19.0 
00:02:17.636 SYMLINK libspdk_vhost.so 00:02:17.636 SYMLINK libspdk_iscsi.so 00:02:17.915 SYMLINK libspdk_nvmf.so 00:02:18.186 CC module/env_dpdk/env_dpdk_rpc.o 00:02:18.186 CC module/vfu_device/vfu_virtio.o 00:02:18.186 CC module/vfu_device/vfu_virtio_blk.o 00:02:18.186 CC module/vfu_device/vfu_virtio_scsi.o 00:02:18.186 CC module/vfu_device/vfu_virtio_rpc.o 00:02:18.186 CC module/keyring/linux/keyring.o 00:02:18.186 CC module/keyring/linux/keyring_rpc.o 00:02:18.186 CC module/scheduler/gscheduler/gscheduler.o 00:02:18.186 CC module/blob/bdev/blob_bdev.o 00:02:18.186 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:18.186 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:18.186 CC module/accel/error/accel_error.o 00:02:18.186 CC module/accel/ioat/accel_ioat_rpc.o 00:02:18.186 CC module/keyring/file/keyring.o 00:02:18.186 CC module/accel/ioat/accel_ioat.o 00:02:18.186 CC module/accel/error/accel_error_rpc.o 00:02:18.186 CC module/keyring/file/keyring_rpc.o 00:02:18.186 CC module/accel/dsa/accel_dsa.o 00:02:18.186 CC module/accel/dsa/accel_dsa_rpc.o 00:02:18.186 CC module/accel/iaa/accel_iaa.o 00:02:18.186 CC module/accel/iaa/accel_iaa_rpc.o 00:02:18.186 CC module/sock/posix/posix.o 00:02:18.186 LIB libspdk_env_dpdk_rpc.a 00:02:18.186 SO libspdk_env_dpdk_rpc.so.6.0 00:02:18.465 LIB libspdk_scheduler_gscheduler.a 00:02:18.465 SYMLINK libspdk_env_dpdk_rpc.so 00:02:18.465 LIB libspdk_keyring_file.a 00:02:18.465 SO libspdk_scheduler_gscheduler.so.4.0 00:02:18.465 LIB libspdk_keyring_linux.a 00:02:18.465 SO libspdk_keyring_file.so.1.0 00:02:18.465 LIB libspdk_accel_error.a 00:02:18.465 LIB libspdk_accel_ioat.a 00:02:18.465 SO libspdk_keyring_linux.so.1.0 00:02:18.465 SYMLINK libspdk_scheduler_gscheduler.so 00:02:18.465 SO libspdk_accel_error.so.2.0 00:02:18.465 SO libspdk_accel_ioat.so.6.0 00:02:18.465 LIB libspdk_scheduler_dpdk_governor.a 00:02:18.465 SYMLINK libspdk_keyring_file.so 00:02:18.465 LIB libspdk_scheduler_dynamic.a 00:02:18.465 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:02:18.465 SYMLINK libspdk_keyring_linux.so 00:02:18.465 SYMLINK libspdk_accel_error.so 00:02:18.465 SYMLINK libspdk_accel_ioat.so 00:02:18.465 SO libspdk_scheduler_dynamic.so.4.0 00:02:18.465 LIB libspdk_accel_iaa.a 00:02:18.465 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:18.465 LIB libspdk_blob_bdev.a 00:02:18.465 SO libspdk_accel_iaa.so.3.0 00:02:18.465 SYMLINK libspdk_scheduler_dynamic.so 00:02:18.465 SO libspdk_blob_bdev.so.11.0 00:02:18.465 LIB libspdk_accel_dsa.a 00:02:18.465 SYMLINK libspdk_accel_iaa.so 00:02:18.465 SO libspdk_accel_dsa.so.5.0 00:02:18.465 SYMLINK libspdk_blob_bdev.so 00:02:18.767 SYMLINK libspdk_accel_dsa.so 00:02:18.767 CC module/bdev/error/vbdev_error.o 00:02:18.767 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.767 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.767 CC module/bdev/gpt/gpt.o 00:02:18.767 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.767 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.767 CC module/bdev/ftl/bdev_ftl.o 00:02:18.767 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.767 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.767 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.767 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.767 CC module/bdev/null/bdev_null.o 00:02:18.767 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.767 CC module/bdev/nvme/bdev_nvme.o 00:02:18.767 CC module/bdev/split/vbdev_split.o 00:02:18.767 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.767 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.767 CC module/bdev/nvme/nvme_rpc.o 00:02:18.767 CC module/bdev/null/bdev_null_rpc.o 00:02:18.767 CC module/bdev/malloc/bdev_malloc.o 00:02:18.767 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.767 CC module/bdev/aio/bdev_aio.o 00:02:18.767 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.767 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.767 CC module/bdev/nvme/vbdev_opal.o 00:02:18.767 LIB libspdk_vfu_device.a 00:02:18.767 CC module/bdev/raid/bdev_raid.o 00:02:18.767 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.767 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.767 CC module/bdev/delay/vbdev_delay.o 00:02:18.767 CC module/bdev/iscsi/bdev_iscsi.o 00:02:19.035 SO libspdk_vfu_device.so.3.0 00:02:19.035 SYMLINK libspdk_vfu_device.so 00:02:19.035 CC module/bdev/raid/bdev_raid_rpc.o 00:02:19.298 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:19.298 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:19.298 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:19.298 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:19.298 LIB libspdk_blobfs_bdev.a 00:02:19.298 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:19.298 CC module/bdev/raid/bdev_raid_sb.o 00:02:19.298 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:19.298 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:19.298 CC module/bdev/raid/raid0.o 00:02:19.298 SO libspdk_blobfs_bdev.so.6.0 00:02:19.298 LIB libspdk_sock_posix.a 00:02:19.298 CC module/bdev/raid/raid1.o 00:02:19.298 LIB libspdk_bdev_split.a 00:02:19.298 CC module/bdev/raid/concat.o 00:02:19.298 SO libspdk_sock_posix.so.6.0 00:02:19.298 SO libspdk_bdev_split.so.6.0 00:02:19.298 LIB libspdk_bdev_error.a 00:02:19.298 LIB libspdk_bdev_null.a 00:02:19.298 SYMLINK libspdk_blobfs_bdev.so 00:02:19.298 SO libspdk_bdev_error.so.6.0 00:02:19.556 LIB libspdk_bdev_gpt.a 00:02:19.556 SO libspdk_bdev_null.so.6.0 00:02:19.556 SYMLINK libspdk_bdev_split.so 00:02:19.556 LIB libspdk_bdev_passthru.a 00:02:19.556 LIB libspdk_bdev_ftl.a 00:02:19.556 SO libspdk_bdev_gpt.so.6.0 00:02:19.556 LIB libspdk_bdev_aio.a 00:02:19.556 SYMLINK libspdk_sock_posix.so 00:02:19.556 SO libspdk_bdev_passthru.so.6.0 00:02:19.556 SO libspdk_bdev_ftl.so.6.0 00:02:19.556 LIB libspdk_bdev_zone_block.a 00:02:19.556 SO libspdk_bdev_aio.so.6.0 00:02:19.556 SYMLINK libspdk_bdev_error.so 00:02:19.556 SYMLINK libspdk_bdev_null.so 00:02:19.556 SO libspdk_bdev_zone_block.so.6.0 00:02:19.556 LIB libspdk_bdev_iscsi.a 00:02:19.556 SYMLINK libspdk_bdev_gpt.so 00:02:19.556 LIB 
libspdk_bdev_delay.a 00:02:19.556 SO libspdk_bdev_iscsi.so.6.0 00:02:19.556 SYMLINK libspdk_bdev_ftl.so 00:02:19.556 SYMLINK libspdk_bdev_aio.so 00:02:19.556 SO libspdk_bdev_delay.so.6.0 00:02:19.556 SYMLINK libspdk_bdev_passthru.so 00:02:19.556 LIB libspdk_bdev_malloc.a 00:02:19.556 SYMLINK libspdk_bdev_zone_block.so 00:02:19.556 SO libspdk_bdev_malloc.so.6.0 00:02:19.556 SYMLINK libspdk_bdev_iscsi.so 00:02:19.556 SYMLINK libspdk_bdev_delay.so 00:02:19.813 SYMLINK libspdk_bdev_malloc.so 00:02:19.813 LIB libspdk_bdev_lvol.a 00:02:19.813 LIB libspdk_bdev_virtio.a 00:02:19.813 SO libspdk_bdev_lvol.so.6.0 00:02:19.813 SO libspdk_bdev_virtio.so.6.0 00:02:19.813 SYMLINK libspdk_bdev_lvol.so 00:02:19.813 SYMLINK libspdk_bdev_virtio.so 00:02:20.070 LIB libspdk_bdev_raid.a 00:02:20.070 SO libspdk_bdev_raid.so.6.0 00:02:20.327 SYMLINK libspdk_bdev_raid.so 00:02:21.262 LIB libspdk_bdev_nvme.a 00:02:21.262 SO libspdk_bdev_nvme.so.7.0 00:02:21.521 SYMLINK libspdk_bdev_nvme.so 00:02:21.778 CC module/event/subsystems/vmd/vmd.o 00:02:21.778 CC module/event/subsystems/sock/sock.o 00:02:21.778 CC module/event/subsystems/iobuf/iobuf.o 00:02:21.778 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:21.778 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:21.778 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:21.778 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:21.778 CC module/event/subsystems/keyring/keyring.o 00:02:21.778 CC module/event/subsystems/scheduler/scheduler.o 00:02:22.036 LIB libspdk_event_keyring.a 00:02:22.036 LIB libspdk_event_vhost_blk.a 00:02:22.036 LIB libspdk_event_scheduler.a 00:02:22.036 LIB libspdk_event_vfu_tgt.a 00:02:22.036 LIB libspdk_event_vmd.a 00:02:22.036 LIB libspdk_event_sock.a 00:02:22.036 SO libspdk_event_keyring.so.1.0 00:02:22.036 LIB libspdk_event_iobuf.a 00:02:22.036 SO libspdk_event_vhost_blk.so.3.0 00:02:22.036 SO libspdk_event_scheduler.so.4.0 00:02:22.036 SO libspdk_event_vfu_tgt.so.3.0 00:02:22.036 SO libspdk_event_sock.so.5.0 
00:02:22.036 SO libspdk_event_vmd.so.6.0 00:02:22.036 SO libspdk_event_iobuf.so.3.0 00:02:22.036 SYMLINK libspdk_event_keyring.so 00:02:22.036 SYMLINK libspdk_event_vhost_blk.so 00:02:22.036 SYMLINK libspdk_event_vfu_tgt.so 00:02:22.036 SYMLINK libspdk_event_scheduler.so 00:02:22.036 SYMLINK libspdk_event_sock.so 00:02:22.428 SYMLINK libspdk_event_vmd.so 00:02:22.428 SYMLINK libspdk_event_iobuf.so 00:02:22.428 CC module/event/subsystems/accel/accel.o 00:02:22.428 LIB libspdk_event_accel.a 00:02:22.688 SO libspdk_event_accel.so.6.0 00:02:22.688 SYMLINK libspdk_event_accel.so 00:02:22.944 CC module/event/subsystems/bdev/bdev.o 00:02:22.944 LIB libspdk_event_bdev.a 00:02:22.945 SO libspdk_event_bdev.so.6.0 00:02:23.202 SYMLINK libspdk_event_bdev.so 00:02:23.202 CC module/event/subsystems/ublk/ublk.o 00:02:23.202 CC module/event/subsystems/nbd/nbd.o 00:02:23.202 CC module/event/subsystems/scsi/scsi.o 00:02:23.202 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:23.202 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:23.460 LIB libspdk_event_ublk.a 00:02:23.460 LIB libspdk_event_nbd.a 00:02:23.460 LIB libspdk_event_scsi.a 00:02:23.460 SO libspdk_event_ublk.so.3.0 00:02:23.460 SO libspdk_event_nbd.so.6.0 00:02:23.460 SO libspdk_event_scsi.so.6.0 00:02:23.460 SYMLINK libspdk_event_ublk.so 00:02:23.460 SYMLINK libspdk_event_nbd.so 00:02:23.460 SYMLINK libspdk_event_scsi.so 00:02:23.460 LIB libspdk_event_nvmf.a 00:02:23.717 SO libspdk_event_nvmf.so.6.0 00:02:23.717 SYMLINK libspdk_event_nvmf.so 00:02:23.717 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.717 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.976 LIB libspdk_event_vhost_scsi.a 00:02:23.976 LIB libspdk_event_iscsi.a 00:02:23.976 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.976 SO libspdk_event_iscsi.so.6.0 00:02:23.976 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.976 SYMLINK libspdk_event_iscsi.so 00:02:24.235 SO libspdk.so.6.0 00:02:24.235 SYMLINK libspdk.so 00:02:24.235 CXX app/trace/trace.o 
00:02:24.235 CC app/trace_record/trace_record.o 00:02:24.497 TEST_HEADER include/spdk/accel.h 00:02:24.497 TEST_HEADER include/spdk/accel_module.h 00:02:24.497 TEST_HEADER include/spdk/assert.h 00:02:24.497 TEST_HEADER include/spdk/barrier.h 00:02:24.497 TEST_HEADER include/spdk/base64.h 00:02:24.497 TEST_HEADER include/spdk/bdev.h 00:02:24.497 TEST_HEADER include/spdk/bdev_module.h 00:02:24.497 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.497 CC app/spdk_lspci/spdk_lspci.o 00:02:24.497 TEST_HEADER include/spdk/bit_array.h 00:02:24.497 TEST_HEADER include/spdk/bit_pool.h 00:02:24.497 CC app/spdk_top/spdk_top.o 00:02:24.497 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.497 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.497 TEST_HEADER include/spdk/blobfs.h 00:02:24.497 CC app/spdk_nvme_perf/perf.o 00:02:24.497 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.497 TEST_HEADER include/spdk/blob.h 00:02:24.497 CC test/rpc_client/rpc_client_test.o 00:02:24.497 TEST_HEADER include/spdk/conf.h 00:02:24.497 CC app/spdk_nvme_identify/identify.o 00:02:24.497 TEST_HEADER include/spdk/config.h 00:02:24.497 TEST_HEADER include/spdk/crc16.h 00:02:24.497 TEST_HEADER include/spdk/crc32.h 00:02:24.497 TEST_HEADER include/spdk/cpuset.h 00:02:24.497 TEST_HEADER include/spdk/crc64.h 00:02:24.497 TEST_HEADER include/spdk/dif.h 00:02:24.497 TEST_HEADER include/spdk/dma.h 00:02:24.497 TEST_HEADER include/spdk/endian.h 00:02:24.497 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.497 TEST_HEADER include/spdk/env.h 00:02:24.497 TEST_HEADER include/spdk/event.h 00:02:24.497 TEST_HEADER include/spdk/fd_group.h 00:02:24.497 TEST_HEADER include/spdk/fd.h 00:02:24.497 TEST_HEADER include/spdk/file.h 00:02:24.497 TEST_HEADER include/spdk/ftl.h 00:02:24.497 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.497 TEST_HEADER include/spdk/hexlify.h 00:02:24.497 TEST_HEADER include/spdk/histogram_data.h 00:02:24.497 TEST_HEADER include/spdk/idxd.h 00:02:24.497 TEST_HEADER include/spdk/idxd_spec.h 
00:02:24.497 TEST_HEADER include/spdk/init.h 00:02:24.497 TEST_HEADER include/spdk/ioat.h 00:02:24.497 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.497 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.497 TEST_HEADER include/spdk/json.h 00:02:24.497 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.497 TEST_HEADER include/spdk/keyring.h 00:02:24.497 TEST_HEADER include/spdk/keyring_module.h 00:02:24.497 TEST_HEADER include/spdk/likely.h 00:02:24.497 TEST_HEADER include/spdk/log.h 00:02:24.497 TEST_HEADER include/spdk/lvol.h 00:02:24.497 TEST_HEADER include/spdk/memory.h 00:02:24.497 TEST_HEADER include/spdk/mmio.h 00:02:24.497 TEST_HEADER include/spdk/nbd.h 00:02:24.497 TEST_HEADER include/spdk/notify.h 00:02:24.497 TEST_HEADER include/spdk/nvme.h 00:02:24.497 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.497 CC app/spdk_dd/spdk_dd.o 00:02:24.497 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.497 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.497 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.497 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.497 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.497 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.497 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.497 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.497 TEST_HEADER include/spdk/nvmf.h 00:02:24.497 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.497 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.497 TEST_HEADER include/spdk/opal.h 00:02:24.497 CC app/nvmf_tgt/nvmf_main.o 00:02:24.497 TEST_HEADER include/spdk/opal_spec.h 00:02:24.497 TEST_HEADER include/spdk/pci_ids.h 00:02:24.497 TEST_HEADER include/spdk/pipe.h 00:02:24.497 TEST_HEADER include/spdk/queue.h 00:02:24.497 TEST_HEADER include/spdk/reduce.h 00:02:24.497 TEST_HEADER include/spdk/rpc.h 00:02:24.497 TEST_HEADER include/spdk/scheduler.h 00:02:24.497 TEST_HEADER include/spdk/scsi.h 00:02:24.497 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.497 TEST_HEADER include/spdk/sock.h 00:02:24.497 TEST_HEADER include/spdk/stdinc.h 
00:02:24.497 TEST_HEADER include/spdk/string.h 00:02:24.497 TEST_HEADER include/spdk/thread.h 00:02:24.497 CC test/app/histogram_perf/histogram_perf.o 00:02:24.497 TEST_HEADER include/spdk/trace.h 00:02:24.497 TEST_HEADER include/spdk/trace_parser.h 00:02:24.497 CC test/thread/poller_perf/poller_perf.o 00:02:24.497 CC examples/ioat/perf/perf.o 00:02:24.497 CC app/spdk_tgt/spdk_tgt.o 00:02:24.497 CC examples/util/zipf/zipf.o 00:02:24.497 TEST_HEADER include/spdk/tree.h 00:02:24.497 TEST_HEADER include/spdk/ublk.h 00:02:24.497 CC test/app/jsoncat/jsoncat.o 00:02:24.497 TEST_HEADER include/spdk/util.h 00:02:24.497 TEST_HEADER include/spdk/uuid.h 00:02:24.497 TEST_HEADER include/spdk/version.h 00:02:24.497 CC test/app/stub/stub.o 00:02:24.497 CC app/fio/nvme/fio_plugin.o 00:02:24.497 CC test/env/vtophys/vtophys.o 00:02:24.497 CC examples/ioat/verify/verify.o 00:02:24.497 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.497 CC test/env/memory/memory_ut.o 00:02:24.497 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.497 CC test/env/pci/pci_ut.o 00:02:24.497 TEST_HEADER include/spdk/vhost.h 00:02:24.497 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.497 TEST_HEADER include/spdk/vmd.h 00:02:24.497 TEST_HEADER include/spdk/xor.h 00:02:24.498 TEST_HEADER include/spdk/zipf.h 00:02:24.498 CXX test/cpp_headers/accel.o 00:02:24.498 CC test/dma/test_dma/test_dma.o 00:02:24.498 CC test/app/bdev_svc/bdev_svc.o 00:02:24.498 CC app/fio/bdev/fio_plugin.o 00:02:24.759 LINK spdk_lspci 00:02:24.759 CC test/env/mem_callbacks/mem_callbacks.o 00:02:24.759 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.759 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.759 LINK rpc_client_test 00:02:24.759 LINK spdk_nvme_discover 00:02:24.759 LINK histogram_perf 00:02:24.759 LINK poller_perf 00:02:24.759 LINK interrupt_tgt 00:02:24.759 LINK zipf 00:02:24.759 LINK jsoncat 00:02:24.759 LINK vtophys 00:02:24.759 LINK nvmf_tgt 00:02:24.759 LINK spdk_trace_record 00:02:24.759 LINK 
env_dpdk_post_init 00:02:24.759 LINK stub 00:02:25.020 CXX test/cpp_headers/accel_module.o 00:02:25.020 LINK iscsi_tgt 00:02:25.020 LINK ioat_perf 00:02:25.020 LINK verify 00:02:25.020 LINK bdev_svc 00:02:25.020 LINK spdk_tgt 00:02:25.020 CXX test/cpp_headers/assert.o 00:02:25.020 CXX test/cpp_headers/barrier.o 00:02:25.020 CXX test/cpp_headers/base64.o 00:02:25.020 CXX test/cpp_headers/bdev.o 00:02:25.020 CXX test/cpp_headers/bdev_module.o 00:02:25.020 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:25.283 LINK spdk_dd 00:02:25.283 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:25.283 CXX test/cpp_headers/bdev_zone.o 00:02:25.283 CXX test/cpp_headers/bit_array.o 00:02:25.283 CXX test/cpp_headers/bit_pool.o 00:02:25.283 LINK spdk_trace 00:02:25.283 CXX test/cpp_headers/blob_bdev.o 00:02:25.283 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.283 LINK pci_ut 00:02:25.283 CXX test/cpp_headers/blobfs.o 00:02:25.283 LINK test_dma 00:02:25.283 CXX test/cpp_headers/blob.o 00:02:25.283 CXX test/cpp_headers/conf.o 00:02:25.546 LINK nvme_fuzz 00:02:25.546 CC test/event/event_perf/event_perf.o 00:02:25.546 CC examples/sock/hello_world/hello_sock.o 00:02:25.546 CC test/event/reactor/reactor.o 00:02:25.546 CC examples/thread/thread/thread_ex.o 00:02:25.546 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.546 CC examples/vmd/led/led.o 00:02:25.546 LINK spdk_nvme 00:02:25.546 CXX test/cpp_headers/config.o 00:02:25.546 CXX test/cpp_headers/cpuset.o 00:02:25.546 LINK spdk_bdev 00:02:25.546 CC test/event/reactor_perf/reactor_perf.o 00:02:25.546 CXX test/cpp_headers/crc16.o 00:02:25.546 CXX test/cpp_headers/crc32.o 00:02:25.546 CXX test/cpp_headers/crc64.o 00:02:25.546 CXX test/cpp_headers/dif.o 00:02:25.546 CXX test/cpp_headers/dma.o 00:02:25.546 CXX test/cpp_headers/endian.o 00:02:25.546 CC test/event/app_repeat/app_repeat.o 00:02:25.808 CC examples/idxd/perf/perf.o 00:02:25.808 CXX test/cpp_headers/env_dpdk.o 00:02:25.808 CC test/event/scheduler/scheduler.o 00:02:25.808 LINK event_perf 
00:02:25.808 CXX test/cpp_headers/event.o 00:02:25.808 CXX test/cpp_headers/env.o 00:02:25.808 LINK reactor 00:02:25.808 CXX test/cpp_headers/fd_group.o 00:02:25.808 LINK lsvmd 00:02:25.808 CC app/vhost/vhost.o 00:02:25.808 LINK reactor_perf 00:02:25.808 LINK led 00:02:25.808 LINK mem_callbacks 00:02:25.808 LINK spdk_nvme_perf 00:02:25.808 LINK hello_sock 00:02:25.808 CXX test/cpp_headers/fd.o 00:02:25.808 CXX test/cpp_headers/file.o 00:02:25.808 LINK vhost_fuzz 00:02:25.808 CXX test/cpp_headers/ftl.o 00:02:26.075 LINK spdk_top 00:02:26.075 LINK spdk_nvme_identify 00:02:26.075 CXX test/cpp_headers/gpt_spec.o 00:02:26.075 LINK thread 00:02:26.075 LINK app_repeat 00:02:26.075 CXX test/cpp_headers/hexlify.o 00:02:26.075 CXX test/cpp_headers/histogram_data.o 00:02:26.075 CXX test/cpp_headers/idxd.o 00:02:26.075 CC test/accel/dif/dif.o 00:02:26.075 CC test/blobfs/mkfs/mkfs.o 00:02:26.075 CXX test/cpp_headers/idxd_spec.o 00:02:26.075 CXX test/cpp_headers/init.o 00:02:26.075 CXX test/cpp_headers/ioat.o 00:02:26.075 CXX test/cpp_headers/ioat_spec.o 00:02:26.075 CXX test/cpp_headers/iscsi_spec.o 00:02:26.075 CXX test/cpp_headers/json.o 00:02:26.075 LINK scheduler 00:02:26.075 CC test/nvme/aer/aer.o 00:02:26.075 CC test/nvme/reset/reset.o 00:02:26.075 LINK vhost 00:02:26.075 CC test/lvol/esnap/esnap.o 00:02:26.075 CXX test/cpp_headers/jsonrpc.o 00:02:26.336 CC test/nvme/sgl/sgl.o 00:02:26.336 CXX test/cpp_headers/keyring.o 00:02:26.336 CXX test/cpp_headers/keyring_module.o 00:02:26.336 CXX test/cpp_headers/likely.o 00:02:26.336 CC test/nvme/e2edp/nvme_dp.o 00:02:26.336 CC test/nvme/overhead/overhead.o 00:02:26.336 CXX test/cpp_headers/log.o 00:02:26.336 LINK idxd_perf 00:02:26.336 CXX test/cpp_headers/lvol.o 00:02:26.336 CC test/nvme/err_injection/err_injection.o 00:02:26.336 CC test/nvme/startup/startup.o 00:02:26.336 CC test/nvme/reserve/reserve.o 00:02:26.336 LINK mkfs 00:02:26.600 CC examples/nvme/hello_world/hello_world.o 00:02:26.600 CC 
test/nvme/simple_copy/simple_copy.o 00:02:26.600 CC examples/nvme/reconnect/reconnect.o 00:02:26.600 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:26.600 CC test/nvme/connect_stress/connect_stress.o 00:02:26.600 CXX test/cpp_headers/memory.o 00:02:26.600 CC test/nvme/boot_partition/boot_partition.o 00:02:26.600 CXX test/cpp_headers/mmio.o 00:02:26.600 CXX test/cpp_headers/nbd.o 00:02:26.600 CC examples/nvme/arbitration/arbitration.o 00:02:26.600 LINK memory_ut 00:02:26.600 CC examples/nvme/hotplug/hotplug.o 00:02:26.600 CC examples/accel/perf/accel_perf.o 00:02:26.600 CC test/nvme/compliance/nvme_compliance.o 00:02:26.600 LINK reset 00:02:26.600 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.600 LINK aer 00:02:26.600 CC test/nvme/fdp/fdp.o 00:02:26.600 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.862 LINK sgl 00:02:26.862 LINK nvme_dp 00:02:26.862 LINK err_injection 00:02:26.862 LINK startup 00:02:26.862 CC test/nvme/cuse/cuse.o 00:02:26.862 LINK overhead 00:02:26.862 CC examples/blob/hello_world/hello_blob.o 00:02:26.862 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:26.862 LINK reserve 00:02:26.862 LINK dif 00:02:26.862 CC examples/nvme/abort/abort.o 00:02:26.862 LINK hello_world 00:02:26.862 CXX test/cpp_headers/notify.o 00:02:26.862 LINK boot_partition 00:02:26.862 LINK simple_copy 00:02:26.862 LINK connect_stress 00:02:26.862 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:26.862 CXX test/cpp_headers/nvme.o 00:02:27.122 CXX test/cpp_headers/nvme_intel.o 00:02:27.122 CXX test/cpp_headers/nvme_ocssd.o 00:02:27.122 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:27.122 CXX test/cpp_headers/nvme_spec.o 00:02:27.122 LINK doorbell_aers 00:02:27.122 CXX test/cpp_headers/nvme_zns.o 00:02:27.122 LINK fused_ordering 00:02:27.122 CXX test/cpp_headers/nvmf_cmd.o 00:02:27.122 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:27.122 LINK hotplug 00:02:27.122 CC examples/blob/cli/blobcli.o 00:02:27.122 LINK reconnect 00:02:27.122 CXX test/cpp_headers/nvmf.o 
00:02:27.122 CXX test/cpp_headers/nvmf_spec.o 00:02:27.122 LINK cmb_copy 00:02:27.122 CXX test/cpp_headers/nvmf_transport.o 00:02:27.122 LINK arbitration 00:02:27.122 CXX test/cpp_headers/opal.o 00:02:27.122 CXX test/cpp_headers/opal_spec.o 00:02:27.389 CXX test/cpp_headers/pci_ids.o 00:02:27.389 LINK nvme_compliance 00:02:27.389 LINK hello_blob 00:02:27.389 LINK fdp 00:02:27.389 LINK pmr_persistence 00:02:27.389 CXX test/cpp_headers/pipe.o 00:02:27.389 CXX test/cpp_headers/queue.o 00:02:27.389 CXX test/cpp_headers/reduce.o 00:02:27.389 CXX test/cpp_headers/rpc.o 00:02:27.389 CXX test/cpp_headers/scheduler.o 00:02:27.389 CXX test/cpp_headers/scsi.o 00:02:27.389 CXX test/cpp_headers/scsi_spec.o 00:02:27.389 CXX test/cpp_headers/sock.o 00:02:27.389 CXX test/cpp_headers/stdinc.o 00:02:27.389 CXX test/cpp_headers/string.o 00:02:27.389 LINK nvme_manage 00:02:27.389 CXX test/cpp_headers/thread.o 00:02:27.389 CXX test/cpp_headers/trace.o 00:02:27.389 CXX test/cpp_headers/trace_parser.o 00:02:27.389 CXX test/cpp_headers/tree.o 00:02:27.389 CXX test/cpp_headers/ublk.o 00:02:27.647 CXX test/cpp_headers/util.o 00:02:27.647 LINK accel_perf 00:02:27.647 CXX test/cpp_headers/uuid.o 00:02:27.647 CXX test/cpp_headers/version.o 00:02:27.647 LINK abort 00:02:27.647 CXX test/cpp_headers/vfio_user_pci.o 00:02:27.647 CXX test/cpp_headers/vfio_user_spec.o 00:02:27.647 CXX test/cpp_headers/vhost.o 00:02:27.647 CXX test/cpp_headers/vmd.o 00:02:27.647 CXX test/cpp_headers/xor.o 00:02:27.647 CC test/bdev/bdevio/bdevio.o 00:02:27.647 CXX test/cpp_headers/zipf.o 00:02:27.647 LINK iscsi_fuzz 00:02:27.904 LINK blobcli 00:02:28.162 CC examples/bdev/hello_world/hello_bdev.o 00:02:28.162 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.162 LINK bdevio 00:02:28.450 LINK hello_bdev 00:02:28.450 LINK cuse 00:02:28.707 LINK bdevperf 00:02:29.274 CC examples/nvmf/nvmf/nvmf.o 00:02:29.532 LINK nvmf 00:02:32.063 LINK esnap 00:02:32.063 00:02:32.063 real 0m54.305s 00:02:32.063 user 10m38.347s 00:02:32.063 
sys 2m16.392s 00:02:32.063 21:24:22 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:32.063 21:24:22 make -- common/autotest_common.sh@10 -- $ set +x 00:02:32.063 ************************************ 00:02:32.063 END TEST make 00:02:32.063 ************************************ 00:02:32.063 21:24:22 -- common/autotest_common.sh@1142 -- $ return 0 00:02:32.063 21:24:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:32.063 21:24:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:32.063 21:24:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:32.063 21:24:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.063 21:24:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:32.063 21:24:22 -- pm/common@44 -- $ pid=176008 00:02:32.063 21:24:22 -- pm/common@50 -- $ kill -TERM 176008 00:02:32.063 21:24:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.063 21:24:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:32.063 21:24:22 -- pm/common@44 -- $ pid=176010 00:02:32.063 21:24:22 -- pm/common@50 -- $ kill -TERM 176010 00:02:32.063 21:24:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.063 21:24:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:32.063 21:24:22 -- pm/common@44 -- $ pid=176012 00:02:32.063 21:24:22 -- pm/common@50 -- $ kill -TERM 176012 00:02:32.063 21:24:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.063 21:24:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:32.063 21:24:22 -- pm/common@44 -- $ pid=176041 00:02:32.063 21:24:22 -- pm/common@50 -- $ sudo -E kill -TERM 176041 00:02:32.063 21:24:22 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:32.063 21:24:22 -- nvmf/common.sh@7 -- # uname -s 00:02:32.063 21:24:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:32.063 21:24:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:32.063 21:24:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:32.063 21:24:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:32.063 21:24:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:32.063 21:24:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:32.063 21:24:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:32.063 21:24:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:32.063 21:24:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:32.063 21:24:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:32.063 21:24:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:02:32.063 21:24:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:02:32.063 21:24:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:32.063 21:24:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:32.063 21:24:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:32.063 21:24:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:32.063 21:24:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:32.063 21:24:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:32.063 21:24:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.063 21:24:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.063 21:24:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.063 21:24:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.063 21:24:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.063 21:24:22 -- paths/export.sh@5 -- # export PATH 00:02:32.064 21:24:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.064 21:24:22 -- nvmf/common.sh@47 -- # : 0 00:02:32.064 21:24:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:32.064 21:24:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:32.064 21:24:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:32.064 21:24:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:32.321 21:24:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:32.321 21:24:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:32.321 21:24:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:32.321 21:24:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:32.321 21:24:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:32.321 21:24:22 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:32.321 21:24:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:32.321 21:24:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:32.321 21:24:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.321 21:24:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:32.321 21:24:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:32.321 21:24:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:32.321 21:24:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:32.321 21:24:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:32.321 21:24:22 -- spdk/autotest.sh@48 -- # udevadm_pid=230840 00:02:32.321 21:24:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:32.321 21:24:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:32.321 21:24:22 -- pm/common@17 -- # local monitor 00:02:32.321 21:24:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.321 21:24:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.321 21:24:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.321 21:24:22 -- pm/common@21 -- # date +%s 00:02:32.321 21:24:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:32.321 21:24:22 -- pm/common@21 -- # date +%s 00:02:32.321 21:24:22 -- pm/common@25 -- # sleep 1 00:02:32.321 21:24:22 -- pm/common@21 -- # date +%s 00:02:32.321 21:24:22 -- pm/common@21 -- # date +%s 00:02:32.321 21:24:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721071462 00:02:32.321 21:24:22 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721071462 00:02:32.321 21:24:22 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721071462 00:02:32.321 21:24:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721071462 00:02:32.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721071462_collect-vmstat.pm.log 00:02:32.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721071462_collect-cpu-load.pm.log 00:02:32.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721071462_collect-cpu-temp.pm.log 00:02:32.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721071462_collect-bmc-pm.bmc.pm.log 00:02:33.259 21:24:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:33.259 21:24:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:33.259 21:24:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:33.259 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:02:33.259 21:24:23 -- spdk/autotest.sh@59 -- # create_test_list 00:02:33.259 21:24:23 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:33.259 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:02:33.259 21:24:23 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:33.259 21:24:23 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.259 21:24:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.259 21:24:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:33.259 21:24:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:33.259 21:24:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:33.259 21:24:23 -- common/autotest_common.sh@1455 -- # uname 00:02:33.259 21:24:23 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:33.259 21:24:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:33.259 21:24:23 -- common/autotest_common.sh@1475 -- # uname 00:02:33.259 21:24:23 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:33.259 21:24:23 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:33.259 21:24:23 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:33.260 21:24:23 -- spdk/autotest.sh@72 -- # hash lcov 00:02:33.260 21:24:23 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:33.260 21:24:23 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:33.260 --rc lcov_branch_coverage=1 00:02:33.260 --rc lcov_function_coverage=1 00:02:33.260 --rc genhtml_branch_coverage=1 00:02:33.260 --rc genhtml_function_coverage=1 00:02:33.260 --rc genhtml_legend=1 00:02:33.260 --rc geninfo_all_blocks=1 00:02:33.260 ' 00:02:33.260 21:24:23 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:33.260 --rc lcov_branch_coverage=1 00:02:33.260 --rc lcov_function_coverage=1 00:02:33.260 --rc genhtml_branch_coverage=1 00:02:33.260 --rc genhtml_function_coverage=1 00:02:33.260 --rc genhtml_legend=1 00:02:33.260 --rc geninfo_all_blocks=1 00:02:33.260 ' 00:02:33.260 21:24:23 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:33.260 --rc lcov_branch_coverage=1 00:02:33.260 --rc lcov_function_coverage=1 00:02:33.260 --rc genhtml_branch_coverage=1 00:02:33.260 --rc 
genhtml_function_coverage=1 00:02:33.260 --rc genhtml_legend=1 00:02:33.260 --rc geninfo_all_blocks=1 00:02:33.260 --no-external' 00:02:33.260 21:24:23 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:33.260 --rc lcov_branch_coverage=1 00:02:33.260 --rc lcov_function_coverage=1 00:02:33.260 --rc genhtml_branch_coverage=1 00:02:33.260 --rc genhtml_function_coverage=1 00:02:33.260 --rc genhtml_legend=1 00:02:33.260 --rc geninfo_all_blocks=1 00:02:33.260 --no-external' 00:02:33.260 21:24:23 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:33.260 lcov: LCOV version 1.14 00:02:33.260 21:24:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:35.791 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:35.791 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:35.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:35.792 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions 
found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:35.792 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:35.792 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:35.792 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:35.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:35.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:35.793 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions 
found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:35.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:35.793 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:53.861 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:53.861 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:08.764 21:24:58 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:08.764 21:24:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:08.764 21:24:58 -- common/autotest_common.sh@10 -- # set +x 00:03:08.764 21:24:58 -- spdk/autotest.sh@91 -- # rm -f 00:03:08.764 21:24:58 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.024 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:03:09.024 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:03:09.024 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:03:09.024 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:03:09.284 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:03:09.284 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:03:09.284 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:03:09.284 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:03:09.284 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:03:09.284 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:03:09.284 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:03:09.284 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:03:09.284 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:03:09.284 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:03:09.284 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:03:09.284 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:03:09.284 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:03:09.284 21:25:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:09.284 21:25:00 -- common/autotest_common.sh@1669 -- # zoned_devs=() 
00:03:09.284 21:25:00 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:09.284 21:25:00 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:09.284 21:25:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:09.284 21:25:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:09.284 21:25:00 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:09.284 21:25:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:09.284 21:25:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:09.284 21:25:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:09.284 21:25:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:09.284 21:25:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:09.284 21:25:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:09.284 21:25:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:09.285 21:25:00 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:09.285 No valid GPT data, bailing 00:03:09.285 21:25:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:09.285 21:25:00 -- scripts/common.sh@391 -- # pt= 00:03:09.285 21:25:00 -- scripts/common.sh@392 -- # return 1 00:03:09.285 21:25:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:09.285 1+0 records in 00:03:09.285 1+0 records out 00:03:09.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00153301 s, 684 MB/s 00:03:09.285 21:25:00 -- spdk/autotest.sh@118 -- # sync 00:03:09.543 21:25:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:09.543 21:25:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:09.543 21:25:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:10.942 21:25:01 -- spdk/autotest.sh@124 -- # uname -s 00:03:10.942 21:25:01 -- 
spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:10.942 21:25:01 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:10.942 21:25:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.942 21:25:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.942 21:25:01 -- common/autotest_common.sh@10 -- # set +x 00:03:10.942 ************************************ 00:03:10.942 START TEST setup.sh 00:03:10.942 ************************************ 00:03:10.942 21:25:01 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:11.200 * Looking for test storage... 00:03:11.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.200 21:25:01 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:11.200 21:25:01 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:11.200 21:25:01 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:11.200 21:25:01 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:11.200 21:25:01 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:11.200 21:25:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.200 ************************************ 00:03:11.200 START TEST acl 00:03:11.200 ************************************ 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:11.200 * Looking for test storage... 
00:03:11.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.200 21:25:01 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.200 21:25:01 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:11.200 21:25:01 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:11.200 21:25:01 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:11.200 21:25:01 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:11.200 21:25:01 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:11.200 21:25:01 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:11.200 21:25:01 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.200 21:25:01 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.579 21:25:03 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:12.579 21:25:03 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:12.579 21:25:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.579 21:25:03 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:12.579 21:25:03 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.579 21:25:03 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:13.515 Hugepages 00:03:13.515 node hugesize free / total 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 00:03:13.515 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.515 21:25:04 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.515 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.516 
21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:13.516 21:25:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:13.516 21:25:04 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.516 21:25:04 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.516 21:25:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:13.516 ************************************ 00:03:13.516 START TEST denied 00:03:13.516 ************************************ 00:03:13.516 21:25:04 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:13.516 21:25:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:03:13.516 21:25:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:03:13.516 21:25:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:13.516 21:25:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.516 21:25:04 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:14.891 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.891 21:25:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.428 00:03:17.428 real 0m3.588s 00:03:17.428 user 0m1.081s 00:03:17.428 sys 0m1.677s 00:03:17.428 21:25:07 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.428 21:25:07 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:17.428 ************************************ 00:03:17.428 END TEST denied 00:03:17.428 ************************************ 00:03:17.428 21:25:07 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:17.428 21:25:07 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:17.428 21:25:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.428 21:25:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.428 21:25:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:17.428 
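The `denied` test above sets `PCI_BLOCKED=' 0000:84:00.0'` before invoking `setup.sh` and greps for the "Skipping denied controller" message. As a hedged sketch, allow/block filtering of this kind can be expressed as below; the function name `pci_can_use` and the exact matching rules are illustrative assumptions, not the actual `spdk/scripts/setup.sh` implementation:

```shell
# Illustrative sketch only: decide whether a PCI device (by BDF address)
# may be bound, honoring PCI_BLOCKED / PCI_ALLOWED style variables.
# Returns 0 (usable) unless the BDF is listed in PCI_BLOCKED, or
# PCI_ALLOWED is non-empty and the BDF is absent from it.
pci_can_use() {
    bdf=$1
    for b in $PCI_BLOCKED; do
        [ "$b" = "$bdf" ] && return 1
    done
    # Empty allow-list means "everything not blocked is allowed"
    [ -z "$PCI_ALLOWED" ] && return 0
    for b in $PCI_ALLOWED; do
        [ "$b" = "$bdf" ] && return 0
    done
    return 1
}

PCI_BLOCKED=" 0000:84:00.0"
if pci_can_use 0000:84:00.0; then
    echo "binding 0000:84:00.0"
else
    echo "Skipping denied controller at 0000:84:00.0"
fi
# prints: Skipping denied controller at 0000:84:00.0
```

The subsequent `allowed` test inverts the same mechanism: with `PCI_ALLOWED=0000:84:00.0` set, only that controller is rebound (`nvme -> vfio-pci` in the log), and everything else is skipped.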
************************************ 00:03:17.428 START TEST allowed 00:03:17.428 ************************************ 00:03:17.428 21:25:07 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:17.428 21:25:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:03:17.428 21:25:07 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:17.428 21:25:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.428 21:25:07 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.428 21:25:07 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:03:19.335 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:19.335 21:25:10 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:19.335 21:25:10 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:19.335 21:25:10 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:19.335 21:25:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.335 21:25:10 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.715 00:03:20.715 real 0m3.490s 00:03:20.715 user 0m0.948s 00:03:20.715 sys 0m1.562s 00:03:20.715 21:25:11 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.715 21:25:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:20.715 ************************************ 00:03:20.715 END TEST allowed 00:03:20.715 ************************************ 00:03:20.715 21:25:11 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:20.715 00:03:20.715 real 0m9.663s 00:03:20.715 user 0m3.055s 00:03:20.715 sys 0m4.922s 00:03:20.715 21:25:11 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.715 21:25:11 setup.sh.acl -- common/autotest_common.sh@10 -- # 
set +x 00:03:20.715 ************************************ 00:03:20.715 END TEST acl 00:03:20.715 ************************************ 00:03:20.715 21:25:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:20.715 21:25:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.715 21:25:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.715 21:25:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.715 21:25:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:20.715 ************************************ 00:03:20.715 START TEST hugepages 00:03:20.715 ************************************ 00:03:20.715 21:25:11 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.975 * Looking for test storage... 00:03:20.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:20.975 21:25:11 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33013820 kB' 'MemAvailable: 36452700 kB' 'Buffers: 5520 kB' 'Cached: 13137772 kB' 'SwapCached: 0 kB' 'Active: 10197480 kB' 'Inactive: 3430260 kB' 'Active(anon): 9797052 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487316 kB' 'Mapped: 151196 kB' 'Shmem: 9312604 kB' 'KReclaimable: 154152 kB' 'Slab: 381776 kB' 'SReclaimable: 154152 kB' 'SUnreclaim: 227624 kB' 'KernelStack: 10288 kB' 'PageTables: 7308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437040 kB' 'Committed_AS: 10786400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187196 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.975 21:25:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... identical xtrace iterations elided: the get_meminfo loop tests each remaining /proc/meminfo field against Hugepagesize and hits `continue` for every one ...] 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- #
no_nodes=2 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:20.977 21:25:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:20.977 21:25:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.977 21:25:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.977 21:25:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.977 ************************************ 00:03:20.977 START TEST default_setup 
00:03:20.977 ************************************ 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup 
-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.977 21:25:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.913 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:21.913 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:21.913 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:21.913 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:21.913 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:21.913 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:21.913 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:22.172 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:22.172 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:22.172 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:22.172 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:22.172 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:22.172 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:22.172 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:22.172 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:22.172 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:23.113 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local 
surp 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35149688 kB' 'MemAvailable: 38588552 kB' 'Buffers: 5520 kB' 'Cached: 13137852 kB' 'SwapCached: 0 kB' 'Active: 10215524 kB' 'Inactive: 3430260 kB' 'Active(anon): 9815096 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 
8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505536 kB' 'Mapped: 151244 kB' 'Shmem: 9312684 kB' 'KReclaimable: 154116 kB' 'Slab: 381560 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227444 kB' 'KernelStack: 10240 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10805740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187196 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.114 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.114 21:25:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35149972 kB' 'MemAvailable: 38588836 kB' 'Buffers: 5520 kB' 'Cached: 13137852 kB' 'SwapCached: 0 kB' 'Active: 10216260 kB' 'Inactive: 3430260 kB' 'Active(anon): 9815832 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505600 kB' 'Mapped: 151244 kB' 'Shmem: 9312684 kB' 'KReclaimable: 154116 kB' 'Slab: 381496 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227380 kB' 'KernelStack: 10384 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10805760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187308 kB' 'VmallocChunk: 0 kB' 'Percpu: 
19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.115 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 
21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.116 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.116 21:25:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... per-field scan of the remaining /proc/meminfo entries (CommitLimit through HugePages_Free) elided: none match HugePages_Surp ...] 00:03:23.116-117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:23.117
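The trace above is setup/common.sh's `get_meminfo` helper splitting each /proc/meminfo line on `IFS=': '` with `read -r var val _`, comparing the field name against the requested key, and echoing 0 when no field matches. A minimal standalone sketch of that pattern (a hypothetical stand-in with an explicit input file argument, not SPDK's actual function):

```shell
#!/usr/bin/env bash
# Sketch of the parsing pattern the trace shows: split each meminfo-style
# line on ': ' and print the value of one requested field.
get_meminfo() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        # "var" is the field name, "val" its number; "_" swallows the "kB" unit
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    echo 0   # an absent field reads as 0, matching the surp/resv results above
}

# Hypothetical sample standing in for /proc/meminfo
sample=$(mktemp)
printf '%s\n' 'MemTotal: 52291180 kB' 'HugePages_Total: 1024' \
              'HugePages_Rsvd: 0' 'Hugepagesize: 2048 kB' > "$sample"

get_meminfo HugePages_Total "$sample"   # prints 1024
get_meminfo HugePages_Surp  "$sample"   # prints 0 (field not present)
```

Because `IFS=': '` treats runs of colons and spaces as one delimiter, the same loop handles the real /proc/meminfo layout, where the value is padded with spaces after the colon.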
21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35152240 kB' 'MemAvailable: 38591104 kB' 'Buffers: 5520 kB' 'Cached: 13137872 kB' 'SwapCached: 0 kB' 'Active: 10214924 kB' 'Inactive: 3430260 kB' 'Active(anon): 9814496 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504916 kB' 'Mapped: 151244 kB' 'Shmem: 9312704 kB' 'KReclaimable: 154116 kB' 'Slab: 381560 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227444 kB' 'KernelStack: 10256 kB' 'PageTables: 7368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10803548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187180 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:23.117 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... per-field scan (MemTotal through HugePages_Free) elided: none match HugePages_Rsvd ...] 00:03:23.117-118 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.118 nr_hugepages=1024 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.118
resv_hugepages=0 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.118 surplus_hugepages=0 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.118 anon_hugepages=0 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.118 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35151988 kB' 'MemAvailable: 38590852 kB' 'Buffers: 5520 kB' 'Cached: 13137876 kB' 
'SwapCached: 0 kB' 'Active: 10214352 kB' 'Inactive: 3430260 kB' 'Active(anon): 9813924 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504396 kB' 'Mapped: 151220 kB' 'Shmem: 9312708 kB' 'KReclaimable: 154116 kB' 'Slab: 381564 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227448 kB' 'KernelStack: 10160 kB' 'PageTables: 7108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10803572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187180 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [... per-field scan (MemAvailable through Inactive(file)) elided: none match HugePages_Total ...] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 
21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.119 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 
21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 
21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.120 
21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20089988 kB' 'MemUsed: 12744704 kB' 'SwapCached: 0 kB' 'Active: 6623592 kB' 'Inactive: 3336932 kB' 'Active(anon): 6462264 kB' 'Inactive(anon): 0 kB' 'Active(file): 161328 kB' 'Inactive(file): 3336932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9577840 kB' 'Mapped: 72540 kB' 'AnonPages: 385812 kB' 'Shmem: 6079580 kB' 'KernelStack: 5480 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89740 kB' 'Slab: 218356 kB' 'SReclaimable: 89740 kB' 'SUnreclaim: 128616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:23.120
21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.120 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # echo 0 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:23.121 node0=1024 expecting 1024 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:23.121 00:03:23.121 real 0m2.224s 00:03:23.121 user 0m0.643s 00:03:23.121 sys 0m0.759s 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.121 21:25:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:23.121 ************************************ 00:03:23.121 END TEST default_setup 00:03:23.121 ************************************ 00:03:23.121 21:25:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:23.121 21:25:13 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:23.121 21:25:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:23.121 21:25:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.121 21:25:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:23.381 ************************************ 00:03:23.381 START TEST per_node_1G_alloc 00:03:23.381 ************************************ 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- 
common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes 
in "${user_nodes[@]}" 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.381 21:25:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.321 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:24.321 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:24.321 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:24.321 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:24.321 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:24.321 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:24.321 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:24.321 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:24.321 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:24.321 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:24.321 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:24.321 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:24.321 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 
00:03:24.321 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:24.321 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:24.321 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:24.321 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.321 21:25:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35133516 kB' 'MemAvailable: 38572380 kB' 'Buffers: 5520 kB' 'Cached: 13137964 kB' 'SwapCached: 0 kB' 'Active: 10214512 kB' 'Inactive: 3430260 kB' 'Active(anon): 9814084 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504512 kB' 'Mapped: 151292 kB' 'Shmem: 9312796 kB' 'KReclaimable: 154116 kB' 'Slab: 381576 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227460 kB' 'KernelStack: 10160 kB' 'PageTables: 7044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10803908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187180 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.321 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.322 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35133900 kB' 'MemAvailable: 38572764 kB' 'Buffers: 5520 kB' 'Cached: 13137964 kB' 'SwapCached: 0 kB' 'Active: 10215212 kB' 'Inactive: 3430260 kB' 'Active(anon): 9814784 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505212 kB' 'Mapped: 151236 kB' 'Shmem: 9312796 kB' 'KReclaimable: 154116 kB' 'Slab: 381560 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227444 kB' 'KernelStack: 10208 kB' 'PageTables: 7168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10803928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187148 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.323 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical @32 compare / @32 continue / @31 IFS / @31 read xtrace elided for each remaining meminfo field until HugePages_Surp is reached ...]
00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.324 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
52291180 kB' 'MemFree: 35134000 kB' 'MemAvailable: 38572864 kB' 'Buffers: 5520 kB' 'Cached: 13137980 kB' 'SwapCached: 0 kB' 'Active: 10215080 kB' 'Inactive: 3430260 kB' 'Active(anon): 9814652 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505016 kB' 'Mapped: 151160 kB' 'Shmem: 9312812 kB' 'KReclaimable: 154116 kB' 'Slab: 381560 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227444 kB' 'KernelStack: 10192 kB' 'PageTables: 7120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10803952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187148 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:24.325 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical @32 compare / @32 continue / @31 IFS / @31 read xtrace elided for each remaining meminfo field scanned for HugePages_Rsvd; the trace continues beyond this excerpt ...]
# [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 
21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.326 nr_hugepages=1024 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.326 resv_hugepages=0 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.326 surplus_hugepages=0 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.326 anon_hugepages=0 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.326 
21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.326 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35134000 kB' 'MemAvailable: 38572864 kB' 'Buffers: 5520 kB' 'Cached: 13138004 kB' 'SwapCached: 0 kB' 'Active: 10215112 kB' 'Inactive: 3430260 kB' 'Active(anon): 9814684 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505056 kB' 'Mapped: 151160 kB' 'Shmem: 9312836 kB' 
'KReclaimable: 154116 kB' 'Slab: 381560 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227444 kB' 'KernelStack: 10208 kB' 'PageTables: 7164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10803972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187148 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 
21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.327 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.328 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.590 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.590 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 21127112 kB' 'MemUsed: 11707580 kB' 'SwapCached: 0 kB' 'Active: 6624344 kB' 'Inactive: 3336932 kB' 'Active(anon): 6463016 kB' 'Inactive(anon): 0 kB' 'Active(file): 161328 kB' 'Inactive(file): 3336932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9577920 kB' 'Mapped: 72556 kB' 'AnonPages: 386552 kB' 'Shmem: 6079660 kB' 'KernelStack: 5512 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89740 kB' 'Slab: 218264 kB' 'SReclaimable: 89740 kB' 'SUnreclaim: 128524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.590 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.591 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.592 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 14007740 kB' 'MemUsed: 5448748 kB' 'SwapCached: 0 kB' 'Active: 3590788 kB' 'Inactive: 93328 kB' 'Active(anon): 3351688 kB' 'Inactive(anon): 0 kB' 'Active(file): 239100 kB' 'Inactive(file): 93328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3565628 kB' 'Mapped: 78604 kB' 'AnonPages: 118492 kB' 'Shmem: 3233200 kB' 'KernelStack: 4696 kB' 'PageTables: 3288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64376 kB' 'Slab: 163296 kB' 'SReclaimable: 64376 kB' 'SUnreclaim: 98920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.592 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.593 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.594 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.594 21:25:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.594 node0=512 expecting 512 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:24.594 node1=512 expecting 512 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:24.594 00:03:24.594 real 0m1.252s 00:03:24.594 user 0m0.587s 00:03:24.594 sys 0m0.699s 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.594 21:25:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.594 ************************************ 00:03:24.594 END TEST per_node_1G_alloc 00:03:24.594 ************************************ 00:03:24.594 21:25:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:24.594 21:25:15 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:24.594 21:25:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.594 21:25:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.594 21:25:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.594 ************************************ 00:03:24.594 START TEST 
even_2G_alloc 00:03:24.594 ************************************ 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:24.594 21:25:15 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.594 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:24.595 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:24.595 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:24.595 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.595 21:25:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.535 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:25.535 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.535 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:25.535 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:25.535 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:25.535 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:25.535 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:25.535 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:25.535 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:25.535 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:25.535 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:25.535 0000:80:04.5 (8086 3c25): Already using the vfio-pci 
driver 00:03:25.535 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:25.535 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:25.535 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:25.535 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:25.535 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35150248 kB' 'MemAvailable: 38589112 kB' 'Buffers: 5520 kB' 'Cached: 13138100 kB' 'SwapCached: 0 kB' 'Active: 10215184 kB' 'Inactive: 3430260 kB' 'Active(anon): 9814756 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504996 kB' 'Mapped: 151324 kB' 'Shmem: 9312932 kB' 'KReclaimable: 154116 kB' 'Slab: 381492 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227376 kB' 'KernelStack: 10224 kB' 'PageTables: 7252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10804048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187164 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
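[Editor's note] The even_2G_alloc prologue above (hugepages.sh@81-84) walks `_no_nodes` down and assigns each NUMA node an equal share of `_nr_hugepages`, which is what later yields "node0=512 expecting 512" / "node1=512 expecting 512". A hedged sketch of that even split, with illustrative names:

```shell
#!/usr/bin/env bash
# Sketch of the even per-node split: divide nr_hugepages across the
# detected NUMA nodes and report each node's share, as the test's
# "nodeN=512 expecting 512" lines do.
split_hugepages_evenly() {
    local nr=$1 nodes=$2 node
    local -a nodes_test
    for (( node = 0; node < nodes; node++ )); do
        nodes_test[node]=$(( nr / nodes ))
    done
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]}"
    done
}
```

With nr_hugepages=1024 (2 GiB of 2048 kB pages) and two nodes, each node gets 512 pages, matching the verification in the log.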
00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
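[Editor's note] The `mem=("${mem[@]#Node +([0-9]) }")` step at setup/common.sh@29 exists because per-node meminfo files (/sys/devices/system/node/node<N>/meminfo) prefix every line with "Node N ", unlike /proc/meminfo; stripping that prefix lets the same parser handle both. A sketch of just that strip (illustrative wrapper, not SPDK's exact code):

```shell
#!/usr/bin/env bash
# extglob is needed for the +([0-9]) extended pattern used below.
shopt -s extglob

# Strip the "Node N " prefix from per-node meminfo lines so they look
# like plain /proc/meminfo records ("MemTotal: ... kB" etc.).
strip_node_prefix() {
    local -a mem=("$@")
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}
```

After this, the `IFS=': ' read -r var val _` loop from the trace above applies unchanged to either file.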
00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.535 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 
21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35150912 kB' 'MemAvailable: 38589776 kB' 'Buffers: 5520 kB' 'Cached: 13138104 kB' 'SwapCached: 0 kB' 'Active: 10215528 kB' 'Inactive: 3430260 kB' 'Active(anon): 9815100 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505320 kB' 'Mapped: 151252 kB' 'Shmem: 9312936 kB' 'KReclaimable: 154116 kB' 'Slab: 381488 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227372 kB' 'KernelStack: 10192 kB' 'PageTables: 7136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10804064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187116 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.536 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 
21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.537 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.538 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35151544 kB' 'MemAvailable: 38590408 kB' 'Buffers: 5520 kB' 'Cached: 13138120 kB' 'SwapCached: 0 kB' 'Active: 10215192 kB' 'Inactive: 3430260 kB' 'Active(anon): 9814764 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505024 kB' 'Mapped: 151252 kB' 'Shmem: 9312952 kB' 'KReclaimable: 154116 kB' 'Slab: 381544 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227428 kB' 'KernelStack: 10192 kB' 'PageTables: 7160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10804084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187116 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.538 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.539 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.540 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 
21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:25.804 nr_hugepages=1024 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:25.804 resv_hugepages=0 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:25.804 surplus_hugepages=0 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:25.804 anon_hugepages=0 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.804 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35150788 kB' 'MemAvailable: 38589652 kB' 'Buffers: 5520 kB' 'Cached: 13138124 kB' 'SwapCached: 0 kB' 'Active: 10214900 kB' 'Inactive: 3430260 kB' 'Active(anon): 9814472 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504752 kB' 'Mapped: 151252 kB' 'Shmem: 9312956 kB' 'KReclaimable: 154116 kB' 'Slab: 381544 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227428 kB' 'KernelStack: 10192 kB' 'PageTables: 7160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10804108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187116 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.804 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.804 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.805 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
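The `get_nodes` step traced above (`setup/hugepages.sh@27-33`) globs `/sys/devices/system/node/node+([0-9])`, assigns 512 hugepages per NUMA node, and counts the matches as `no_nodes`. A minimal Python mirror of that logic, with the directory names passed in as a parameter (an assumption for illustration, so the logic is testable without sysfs):

```python
import re

def get_nodes(node_dirs, pages_per_node=512):
    # Mirror of the traced loop: keep only entries shaped like "node<N>",
    # keyed by the node index, each assigned the per-node page count.
    nodes = {}
    for name in node_dirs:
        m = re.fullmatch(r"node(\d+)", name)
        if m:
            nodes[int(m.group(1))] = pages_per_node
    return nodes

nodes = get_nodes(["node0", "node1"])
print(len(nodes), nodes[0], nodes[1])  # -> 2 512 512
```

In the log above the same loop runs twice (`nodes_sys[0]=512`, `nodes_sys[1]=512`), giving `no_nodes=2`, which matches the `(( 1024 == nr_hugepages + surp + resv ))` check for an even 2G split across both nodes.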
setup/common.sh@18 -- # local node=0 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 21135580 kB' 'MemUsed: 11699112 kB' 'SwapCached: 0 kB' 'Active: 6624400 kB' 'Inactive: 3336932 kB' 'Active(anon): 6463072 kB' 'Inactive(anon): 0 kB' 'Active(file): 161328 kB' 'Inactive(file): 3336932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9577984 kB' 'Mapped: 72628 kB' 'AnonPages: 386488 kB' 'Shmem: 6079724 kB' 'KernelStack: 5464 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89740 kB' 'Slab: 218236 kB' 'SReclaimable: 89740 kB' 'SUnreclaim: 128496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.806 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 14018444 kB' 'MemUsed: 5438044 kB' 'SwapCached: 0 kB' 'Active: 3590956 kB' 'Inactive: 93328 kB' 'Active(anon): 3351856 kB' 'Inactive(anon): 0 kB' 'Active(file): 239100 kB' 'Inactive(file): 93328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3565700 kB' 'Mapped: 78680 kB' 'AnonPages: 118668 kB' 'Shmem: 3233272 kB' 'KernelStack: 4696 kB' 'PageTables: 3328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64376 kB' 'Slab: 163300 kB' 'SReclaimable: 64376 kB' 'SUnreclaim: 98924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.807 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:25.808 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [per-field scan continues: AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free each tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; no match, continue]
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:25.809 node0=512 expecting 512
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.809 21:25:16
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:25.809 node1=512 expecting 512
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:25.809
00:03:25.809 real 0m1.191s
00:03:25.809 user 0m0.529s
00:03:25.809 sys 0m0.699s
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:25.809 21:25:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:25.809 ************************************
00:03:25.809 END TEST even_2G_alloc
00:03:25.809 ************************************
00:03:25.809 21:25:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:25.809 21:25:16 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:25.809 21:25:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:25.809 21:25:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:25.809 21:25:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:25.809 ************************************
00:03:25.809 START TEST odd_alloc
00:03:25.809 ************************************
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:25.809 21:25:16
setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.809 21:25:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:26.748 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:26.748 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.748 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:26.748 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:26.748 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:26.748 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:26.748 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:26.748 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:26.748 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:26.748 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:26.748 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:26.748 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:26.748 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:26.748 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:26.748 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:26.748 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:26.748 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35165212 kB' 'MemAvailable: 38604076 kB' 'Buffers: 5520 kB' 'Cached: 13138232 kB' 'SwapCached: 0 kB' 'Active: 10212460 kB' 'Inactive: 3430260 kB' 'Active(anon): 9812032 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502156 kB'
'Mapped: 150120 kB' 'Shmem: 9313064 kB' 'KReclaimable: 154116 kB' 'Slab: 381500 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227384 kB' 'KernelStack: 10128 kB' 'PageTables: 6864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 10790016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187100 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB'
00:03:26.748 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [per-field scan: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted each tested against \A\n\o\n\H\u\g\e\P\a\g\e\s; no match, continue]
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.750
21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35164204 kB' 'MemAvailable: 38603068 kB' 'Buffers: 5520 kB' 'Cached: 13138236 kB' 'SwapCached: 0 kB' 'Active: 10212336 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811908 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502100 kB' 'Mapped: 150180 kB' 'Shmem: 9313068 kB' 'KReclaimable: 154116 kB' 'Slab: 381508 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227392 kB' 'KernelStack: 10128 kB' 'PageTables: 6856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 10790036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187068 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB'
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [per-field scan: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached each tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; no match, continue]
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 --
# read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.750 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.751 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.014 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35164204 kB' 'MemAvailable: 38603068 kB' 'Buffers: 5520 kB' 'Cached: 13138252 kB' 'SwapCached: 0 kB' 'Active: 10212604 kB' 'Inactive: 3430260 kB' 'Active(anon): 9812176 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502244 kB' 'Mapped: 150104 kB' 'Shmem: 9313084 kB' 'KReclaimable: 154116 kB' 'Slab: 381524 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227408 kB' 'KernelStack: 10160 kB' 'PageTables: 6956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 10790056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187036 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 
21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 
21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.015 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.016 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:27.017 nr_hugepages=1025 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.017 resv_hugepages=0 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.017 surplus_hugepages=0 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.017 anon_hugepages=0 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == 
nr_hugepages )) 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35163952 kB' 'MemAvailable: 38602816 kB' 'Buffers: 5520 kB' 'Cached: 13138252 kB' 'SwapCached: 0 kB' 'Active: 10212004 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811576 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501716 kB' 'Mapped: 150104 kB' 'Shmem: 9313084 kB' 'KReclaimable: 154116 kB' 'Slab: 381516 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227400 kB' 'KernelStack: 10128 kB' 'PageTables: 6856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 10790076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187004 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 
21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 
21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.017 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 
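The long run of `[[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] -- # continue` entries above is the xtrace of a loop that walks every `/proc/meminfo` field with `IFS=': ' read` until it reaches the requested key (here `HugePages_Total`, value 1025) and echoes it. A minimal sketch of that pattern follows; it mirrors what the trace shows of setup/common.sh's `get_meminfo`, but the `MEMINFO_FILE` override is an illustration-only addition for testability, not part of the real script:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the xtrace: scan a meminfo
# file line by line and echo the value of a single requested field.
get_meminfo() {
    local get=$1 node=${2:-}
    # MEMINFO_FILE is a hypothetical override for testing; the traced
    # script defaults straight to /proc/meminfo.
    local mem_f=${MEMINFO_FILE:-/proc/meminfo}
    # Per-node stats live under sysfs when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Note: per-node files prefix lines with "Node <n> "; the traced
        # script strips that prefix before this comparison.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
```

In the trace, every non-matching field produces one `[[ … ]]` test plus one `continue`, which is why a single `get_meminfo HugePages_Total` call emits dozens of xtrace lines.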
00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.018 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 21137600 kB' 'MemUsed: 11697092 kB' 'SwapCached: 0 kB' 'Active: 6621820 kB' 'Inactive: 3336932 kB' 'Active(anon): 6460492 kB' 'Inactive(anon): 0 kB' 'Active(file): 161328 kB' 'Inactive(file): 3336932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9578072 kB' 'Mapped: 72248 kB' 'AnonPages: 383876 kB' 'Shmem: 6079812 kB' 'KernelStack: 5448 kB' 'PageTables: 3564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89740 kB' 'Slab: 218156 kB' 'SReclaimable: 89740 kB' 'SUnreclaim: 128416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.019 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 
21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 14026352 kB' 'MemUsed: 5430136 kB' 'SwapCached: 0 kB' 'Active: 3590496 kB' 'Inactive: 93328 kB' 'Active(anon): 3351396 kB' 'Inactive(anon): 0 kB' 
'Active(file): 239100 kB' 'Inactive(file): 93328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3565744 kB' 'Mapped: 77856 kB' 'AnonPages: 118112 kB' 'Shmem: 3233316 kB' 'KernelStack: 4680 kB' 'PageTables: 3292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64376 kB' 'Slab: 163360 kB' 'SReclaimable: 64376 kB' 'SUnreclaim: 98984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.020 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:27.021 node0=512 expecting 513
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:27.021 node1=513 expecting 512
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:27.021
00:03:27.021 real 0m1.186s
00:03:27.021 user 0m0.538s
00:03:27.021 sys 0m0.680s
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:27.021 21:25:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:27.021 ************************************
00:03:27.021 END TEST odd_alloc
00:03:27.021 ************************************
00:03:27.021 21:25:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:27.021 21:25:17 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:27.021 21:25:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:27.021 21:25:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:27.021 21:25:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:27.021 ************************************
00:03:27.021 START TEST custom_alloc
00:03:27.021 ************************************
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:27.021 21:25:17
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.021 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.022 21:25:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.961 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.961 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:27.961 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:27.961 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:27.961 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:27.961 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:27.961 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:27.961 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:27.961 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:27.961 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 
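Before invoking scripts/setup.sh, the `custom_alloc` trace above converts each requested size into a count of 2048 kB hugepages (`get_test_nr_hugepages`) and folds the per-node counts into the `HUGENODE` string (`nodes_hp[0]=512,nodes_hp[1]=1024`, 1536 pages total). A sketch of that arithmetic using the values from this run; the figures are specific to this job, not a general rule:

```shell
#!/usr/bin/env bash
# Reproduce the HUGENODE plan from this run: one 2048 kB hugepage per
# 2048 kB of requested size, split per NUMA node, joined with IFS=,.
default_hugepages=2048   # kB, matches Hugepagesize in the meminfo dump

nodes_hp[0]=$(( 1048576 / default_hugepages ))   # 512 pages on node 0
nodes_hp[1]=$(( 2097152 / default_hugepages ))   # 1024 pages on node 1

_nr_hugepages=0
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done

IFS=,
echo "HUGENODE='${HUGENODE[*]}' expecting $_nr_hugepages pages"
```

Setting `IFS=,` before expanding `"${HUGENODE[*]}"` is what produces the comma-joined string the trace shows being exported to setup.sh.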
00:03:27.961 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:27.961 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:27.961 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:27.961 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:27.961 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:28.225 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:28.225 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.225 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 34117904 kB' 'MemAvailable: 37556768 kB' 'Buffers: 5520 kB' 'Cached: 13138356 kB' 'SwapCached: 0 kB' 'Active: 10212612 kB' 'Inactive: 3430260 kB' 'Active(anon): 9812184 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502200 kB' 'Mapped: 150128 kB' 'Shmem: 9313188 kB' 'KReclaimable: 154116 kB' 'Slab: 381484 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227368 kB' 'KernelStack: 10128 kB' 'PageTables: 6848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 10790140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187068 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 
kB' 'DirectMap1G: 48234496 kB' 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.226 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.226
[... identical IFS/read/continue xtrace repeats for every /proc/meminfo field from MemFree through HardwareCorrupted, none matching AnonHugePages ...]
21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.227
21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 34118096 kB' 'MemAvailable: 37556960 kB' 'Buffers: 5520 kB' 'Cached: 13138360 kB' 'SwapCached: 0 kB' 'Active: 10212256 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811828 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501832 kB' 'Mapped: 150120 kB' 'Shmem: 9313192 kB' 'KReclaimable: 154116 kB' 'Slab: 381468 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227352 kB' 'KernelStack: 10112 kB' 'PageTables: 6772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 10790160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187036 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.227 21:25:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.227 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue cycle repeated verbatim for each remaining /proc/meminfo field until HugePages_Surp is reached ...]
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf
'%s\n' 'MemTotal: 52291180 kB' 'MemFree: 34117440 kB' 'MemAvailable: 37556304 kB' 'Buffers: 5520 kB' 'Cached: 13138376 kB' 'SwapCached: 0 kB' 'Active: 10212476 kB' 'Inactive: 3430260 kB' 'Active(anon): 9812048 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502028 kB' 'Mapped: 150120 kB' 'Shmem: 9313208 kB' 'KReclaimable: 154116 kB' 'Slab: 381508 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227392 kB' 'KernelStack: 10128 kB' 'PageTables: 6836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 10790180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187036 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.229 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue cycle repeated verbatim for each remaining /proc/meminfo field against HugePages_Rsvd; trace truncated here ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.231 21:25:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:28.231 nr_hugepages=1536 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.231 resv_hugepages=0 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.231 surplus_hugepages=0 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.231 anon_hugepages=0 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.231 21:25:18 
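The trace above shows `common.sh`'s `get_meminfo` scanning meminfo output line by line with `IFS=': '` and `read -r var val _`, executing `continue` for every key (Zswapped, Dirty, Writeback, ...) until the requested key matches, then echoing its value. A minimal re-creation of that pattern, run here against a hypothetical meminfo snippet rather than the real `/proc/meminfo` (function body is a sketch, not the verbatim SPDK implementation):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop traced above: read "Key: value"
# lines with IFS=': ', skip non-matching keys, echo the matching value.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. skip Zswapped, Dirty, ...
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Hypothetical meminfo snippet for demonstration:
printf 'MemTotal: 52291180 kB\nHugePages_Total: 1536\nHugePages_Rsvd: 0\n' \
    > /tmp/fake_meminfo
get_meminfo HugePages_Total /tmp/fake_meminfo   # prints the raw value field
```

The real script matches the key against a backslash-escaped literal pattern (`\H\u\g\e\P\a\g\e\s\_\R\s\v\d`) to defeat glob expansion; quoting `"$get"` in the sketch achieves the same literal comparison.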
setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 34116180 kB' 'MemAvailable: 37555044 kB' 'Buffers: 5520 kB' 'Cached: 13138400 kB' 'SwapCached: 0 kB' 'Active: 10212316 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811888 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501820 kB' 'Mapped: 150120 kB' 'Shmem: 9313232 kB' 'KReclaimable: 154116 kB' 'Slab: 381508 kB' 'SReclaimable: 154116 kB' 'SUnreclaim: 227392 kB' 'KernelStack: 10112 kB' 'PageTables: 6792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 10790200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187036 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.231 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.232 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.233 21:25:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.233 21:25:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- [repeated trace elided: the read loop compared each remaining meminfo field, AnonHugePages through Unaccepted, against HugePages_Total and hit "continue" on every mismatch]
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 21133928 kB' 'MemUsed: 11700764 kB' 'SwapCached: 0 kB' 'Active: 6621192 kB' 'Inactive: 3336932 kB' 'Active(anon): 6459864 kB' 'Inactive(anon): 0 kB' 'Active(file): 161328 kB' 'Inactive(file): 3336932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9578144 kB' 'Mapped: 72264 kB' 'AnonPages: 383108 kB' 'Shmem: 6079884 kB' 'KernelStack: 5448 kB' 'PageTables: 3468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89740 kB' 'Slab: 218060 kB' 'SReclaimable: 89740 kB' 'SUnreclaim: 128320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:28.233 21:25:18 setup.sh.hugepages.custom_alloc -- [repeated trace elided: each node0 meminfo field, MemTotal through HugePages_Free, was compared against HugePages_Surp and skipped with "continue" until the final field matched]
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.234 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 12981748 kB' 'MemUsed: 6474740 kB' 'SwapCached: 0 kB' 'Active: 3591284 kB' 'Inactive: 93328 kB' 'Active(anon): 3352184 kB' 'Inactive(anon): 0 kB' 'Active(file): 239100 kB' 'Inactive(file): 93328 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3565816 kB' 'Mapped: 77856 kB' 'AnonPages: 118920 kB' 'Shmem: 3233388 kB' 'KernelStack: 4680 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64376 kB' 'Slab: 163448 kB' 'SReclaimable: 64376 kB' 'SUnreclaim: 99072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:28.235 21:25:18 setup.sh.hugepages.custom_alloc -- [repeated trace elided: each node1 meminfo field, MemTotal through HugePages_Free, was compared against HugePages_Surp and skipped with "continue" until the final field matched]
00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return
0 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:28.236 node0=512 expecting 512 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:28.236 node1=1024 expecting 1024 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:28.236 00:03:28.236 real 0m1.283s 00:03:28.236 user 0m0.559s 00:03:28.236 sys 0m0.758s 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.236 21:25:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:28.236 ************************************ 00:03:28.236 END TEST custom_alloc 00:03:28.236 ************************************ 00:03:28.495 21:25:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:28.495 21:25:19 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:28.495 21:25:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.495 21:25:19 
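The trace above is setup/common.sh scanning /proc/meminfo field by field until the requested key (here HugePages_Surp) matches, then echoing its value. A minimal sketch of that pattern is below; the function body is an assumption reconstructed from the xtrace, not the actual setup/common.sh source:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the xtrace (assumed, not verbatim):
# read /proc/meminfo as "field: value unit" records and print the value of
# the first field that matches the requested name, defaulting to 0.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Each non-matching field produces one [[ ... ]] / continue trace line.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    echo 0
}

get_meminfo HugePages_Surp
```

On a node with no surplus hugepages this prints 0, matching the `echo 0` seen at setup/common.sh@33 in the trace.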
setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.495 21:25:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.495 ************************************ 00:03:28.495 START TEST no_shrink_alloc 00:03:28.495 ************************************ 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.495 21:25:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.433 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.433 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:29.433 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:29.433 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:29.433 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:29.433 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:29.433 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:29.434 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:29.434 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:29.434 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:29.434 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:29.434 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:29.434 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:29.434 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:29.434 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:29.434 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:29.434 0000:80:04.0 
(8086 3c20): Already using the vfio-pci driver 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.434 21:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35150936 kB' 'MemAvailable: 38589788 kB' 'Buffers: 5520 kB' 'Cached: 13138484 kB' 'SwapCached: 0 kB' 'Active: 10212792 kB' 'Inactive: 3430260 kB' 'Active(anon): 9812364 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502172 kB' 'Mapped: 150132 kB' 'Shmem: 9313316 kB' 'KReclaimable: 154092 kB' 'Slab: 380972 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 226880 kB' 'KernelStack: 10112 kB' 'PageTables: 6804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10790596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187100 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.434 21:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.434 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.434 21:25:20 [setup/common.sh@31-32: identical "IFS=': ' / read -r var val _ / [[ field == AnonHugePages ]] / continue" trace repeated for each /proc/meminfo field, MemAvailable through Percpu] 00:03:29.435 21:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.435 21:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35150688 kB' 'MemAvailable: 38589540 kB' 'Buffers: 5520 kB' 'Cached: 13138488 kB' 'SwapCached: 0 kB' 'Active: 10212652 kB' 'Inactive: 3430260 kB' 'Active(anon): 9812224 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502032 kB' 'Mapped: 150128 kB' 'Shmem: 9313320 kB' 'KReclaimable: 154092 kB' 'Slab: 380964 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 226872 kB' 'KernelStack: 10128 kB' 'PageTables: 6812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10790612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187068 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.435 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.436 21:25:20 [setup/common.sh@31-32: identical "IFS=': ' / read -r var val _ / [[ field == HugePages_Surp ]] / continue" trace repeated for each /proc/meminfo field, MemAvailable through SwapFree] 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.436 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.699 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.699 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.699 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 
21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.700 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.701 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35150688 kB' 'MemAvailable: 38589540 kB' 'Buffers: 5520 kB' 'Cached: 13138492 kB' 'SwapCached: 0 kB' 'Active: 10212764 kB' 'Inactive: 3430260 kB' 'Active(anon): 9812336 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502164 kB' 'Mapped: 150128 kB' 'Shmem: 9313324 kB' 'KReclaimable: 154092 kB' 'Slab: 380964 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 226872 kB' 'KernelStack: 10144 kB' 'PageTables: 6856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10790636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187068 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 
'DirectMap1G: 48234496 kB' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.702 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.702 21:25:20 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace loop over the remaining /proc/meminfo keys (Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free), each compared against HugePages_Rsvd and skipped via setup/common.sh@32 'continue' ...]
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:29.704 nr_hugepages=1024
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:29.704 resv_hugepages=0
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
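The loop traced above is setup/common.sh's get_meminfo helper: it reads a meminfo-style file entry by entry with `IFS=': ' read -r var val _`, skips every key with `continue` until the requested one appears, then echoes its value and returns. A minimal, self-contained sketch of that parsing pattern (this is an illustrative reimplementation fed a sample file, not SPDK's actual helper; the function and variable names only mirror the log):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern from the trace: split each
# "Key: value kB" line on ':' and whitespace, skip keys until the
# requested one matches, then print its value.
get_meminfo() {
    local get=$1 src=$2 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is skipped -- the 'continue' lines
        # that dominate the xtrace output above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$src"
    return 1
}

# Sample input standing in for /proc/meminfo (values taken from the log).
sample=$(mktemp)
printf '%s\n' 'MemTotal: 52291180 kB' 'HugePages_Total: 1024' \
    'HugePages_Rsvd: 0' > "$sample"

get_meminfo HugePages_Rsvd "$sample"    # prints 0
get_meminfo HugePages_Total "$sample"   # prints 1024
```

This is why the log shows one `[[ Key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` pair per meminfo key: the scan is linear and only the matching key short-circuits with `echo`/`return 0`.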
00:03:29.704 surplus_hugepages=0
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:29.704 anon_hugepages=0
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.704 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35150724 kB' 'MemAvailable: 38589576 kB' 'Buffers: 5520 kB' 'Cached: 13138504 kB' 'SwapCached: 0 kB' 'Active: 10212008 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811580 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501420 kB' 'Mapped: 150128 kB' 'Shmem: 9313336 kB' 'KReclaimable: 154092 kB' 'Slab: 380952 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 226860 kB' 'KernelStack: 10128 kB' 'PageTables: 6840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10790660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187068 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB'
[... xtrace loop over each /proc/meminfo key (MemTotal through Unaccepted), each compared against HugePages_Total and skipped via setup/common.sh@32 'continue' ...]
00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20083668 kB' 'MemUsed: 12751024 kB' 'SwapCached: 0 kB' 'Active: 6622272 kB' 'Inactive: 3336932 kB' 'Active(anon): 6460944 kB' 'Inactive(anon): 0 kB' 'Active(file): 161328 kB' 'Inactive(file): 3336932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9578172 kB' 'Mapped: 72272 kB' 'AnonPages: 384188 kB' 'Shmem: 6079912 kB' 'KernelStack: 5480 kB' 'PageTables: 3556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89740 kB' 'Slab: 217944 kB' 'SReclaimable: 89740 kB' 'SUnreclaim: 128204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.707 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.708 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.709 node0=1024 expecting 1024 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.709 21:25:20 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.676 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:30.676 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.676 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:30.676 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:30.676 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:30.676 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:30.676 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:30.676 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:30.676 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:30.676 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:30.676 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:30.676 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:30.676 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:30.676 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:30.676 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:30.676 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:30.676 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:30.676 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@93 -- # local resv 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35126932 kB' 'MemAvailable: 38565784 kB' 'Buffers: 5520 kB' 'Cached: 13138588 kB' 'SwapCached: 0 kB' 'Active: 10211912 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811484 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501216 kB' 'Mapped: 150300 kB' 'Shmem: 9313420 kB' 'KReclaimable: 154092 kB' 'Slab: 381224 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 227132 kB' 'KernelStack: 10112 kB' 'PageTables: 6812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10790864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187132 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.676 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.677 
21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35127212 kB' 'MemAvailable: 38566064 kB' 'Buffers: 5520 kB' 'Cached: 13138592 kB' 'SwapCached: 0 kB' 'Active: 10212032 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811604 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501420 kB' 'Mapped: 
150220 kB' 'Shmem: 9313424 kB' 'KReclaimable: 154092 kB' 'Slab: 381224 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 227132 kB' 'KernelStack: 10128 kB' 'PageTables: 6852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10790880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187100 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 
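The trace above is long because the helper compares every `/proc/meminfo` key against the one it wants (`HugePages_Surp` here), hitting `continue` for each non-match. The same single-field extraction can be sketched as a one-liner; this is a hypothetical equivalent for illustration, not part of the SPDK scripts, shown against fixed sample input so it is self-contained:

```shell
# Hypothetical awk equivalent of the per-key bash scan traced above:
# split each meminfo-style line on ': ' and print the value of one key.
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Surp: 0' |
  awk -F': +' '$1 == "HugePages_Surp" { print $2 }'
# prints: 0
```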
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.677 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 
21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 
21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.678 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
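The call traced above ends with `echo 0` / `return 0`, which `setup/hugepages.sh` captures as `surp=0`. A minimal sketch of how `get_meminfo` appears to work, based on this trace (the function name, the optional file argument, and the sample data below are assumptions for illustration; the real helper lives in `setup/common.sh` and also handles per-node meminfo, which is omitted here). It requires bash for `[[ ]]`:

```shell
# Sketch (hypothetical, modeled on the traced setup/common.sh loop):
# read meminfo-formatted lines, split on ': ', and echo the value of
# the requested key; 'continue' past every non-matching key.
get_meminfo_sketch() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
        continue    # non-matching key, as in the trace above
    done < "$file"
    return 1
}

# Example against a fixed sample file, so the sketch runs off-Linux too:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 52291180 kB' 'HugePages_Surp: 0' > "$sample"
get_meminfo_sketch HugePages_Surp "$sample"   # prints: 0
rm -f "$sample"
```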
00:03:30.679 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35127580 kB' 'MemAvailable: 38566432 kB' 'Buffers: 5520 kB' 'Cached: 13138612 kB' 'SwapCached: 0 kB' 'Active: 10211940 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811512 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501264 kB' 'Mapped: 150144 kB' 'Shmem: 9313444 kB' 'KReclaimable: 154092 kB' 'Slab: 381232 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 227140 kB' 'KernelStack: 10128 kB' 'PageTables: 6844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10790904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187100 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB'
00:03:30.679 [... setup/common.sh@31-32: per-key scan of the snapshot above for HugePages_Rsvd (MemTotal through HugePages_Free skipped via continue) elided ...]
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:30.680 nr_hugepages=1024
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:30.680 resv_hugepages=0
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:30.680 surplus_hugepages=0
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:30.680 anon_hugepages=0
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.680 [... setup/common.sh@18-31: get_meminfo locals (node=, mem_f=/proc/meminfo), mapfile -t mem, IFS=': ', read -r var val _ elided ...]
00:03:30.680 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 35127328 kB' 'MemAvailable: 38566180 kB' 'Buffers: 5520 kB' 'Cached: 13138632 kB' 'SwapCached: 0 kB' 'Active: 10212004 kB' 'Inactive: 3430260 kB' 'Active(anon): 9811576 kB' 'Inactive(anon): 0 kB' 'Active(file): 400428 kB' 'Inactive(file): 3430260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501264 kB' 'Mapped: 150144 kB' 'Shmem: 9313464 kB' 'KReclaimable: 154092 kB' 'Slab: 381232 kB' 'SReclaimable: 154092 kB' 'SUnreclaim: 227140 kB' 'KernelStack: 10128 kB' 'PageTables: 6844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 10790924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 187116 kB' 'VmallocChunk: 0 kB' 'Percpu: 19328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 387364 kB' 'DirectMap2M: 12118016 kB' 'DirectMap1G: 48234496 kB'
00:03:30.680 [... setup/common.sh@31-32: per-key scan of the snapshot above for HugePages_Total (MemTotal, MemFree, ... skipped via continue) elided ...]
setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.681 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20085964 kB' 'MemUsed: 12748728 kB' 'SwapCached: 0 kB' 'Active: 6621708 kB' 'Inactive: 3336932 kB' 'Active(anon): 6460380 kB' 'Inactive(anon): 0 kB' 'Active(file): 161328 kB' 'Inactive(file): 3336932 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9578320 kB' 'Mapped: 72312 kB' 'AnonPages: 383408 kB' 'Shmem: 6080060 kB' 'KernelStack: 5416 kB' 'PageTables: 3376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89740 kB' 'Slab: 218020 kB' 'SReclaimable: 89740 kB' 'SUnreclaim: 128280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 
21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 
21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.682 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:30.683 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.940 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.940 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.940 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.940 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.940 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.940 node0=1024 expecting 1024 00:03:30.940 21:25:21 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.940 00:03:30.940 real 0m2.415s 00:03:30.940 user 0m1.089s 00:03:30.940 sys 0m1.395s 00:03:30.940 21:25:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.940 21:25:21 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.940 ************************************ 00:03:30.940 END TEST no_shrink_alloc 00:03:30.940 ************************************ 00:03:30.940 21:25:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.940 21:25:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.940 00:03:30.940 real 0m9.997s 00:03:30.940 user 0m4.114s 00:03:30.940 sys 0m5.287s 00:03:30.940 21:25:21 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.940 21:25:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.940 ************************************ 00:03:30.940 END TEST hugepages 00:03:30.940 ************************************ 00:03:30.940 21:25:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:30.940 21:25:21 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.940 21:25:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.940 21:25:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.940 21:25:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.940 ************************************ 00:03:30.940 START TEST driver 00:03:30.940 ************************************ 00:03:30.941 21:25:21 setup.sh.driver -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.941 * Looking for test storage... 00:03:30.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.941 21:25:21 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:30.941 21:25:21 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.941 21:25:21 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.473 21:25:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:33.473 21:25:23 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.473 21:25:23 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.473 21:25:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:33.473 ************************************ 00:03:33.473 START TEST guess_driver 00:03:33.473 ************************************ 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # 
iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 102 > 0 )) 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:33.473 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.473 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.473 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:33.473 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:33.473 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:33.473 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:33.473 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:33.473 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:33.474 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:33.474 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:33.474 Looking for driver=vfio-pci 00:03:33.474 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:03:33.474 21:25:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:33.474 21:25:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.474 21:25:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.410 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.347 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.347 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:35.347 21:25:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.347 21:25:26 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:35.347 21:25:26 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:35.347 21:25:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 
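The guess_driver trace above boils down to: check the vfio unsafe-no-IOMMU knob, count the entries under /sys/kernel/iommu_groups (102 here), and confirm vfio_pci resolves through `modprobe --show-depends`, which is why the test settles on vfio-pci. A rough re-creation of that decision logic, taking a sysfs root as a parameter so it can run against a fake tree without real hardware (the uio_pci_generic fallback name is illustrative of the no-IOMMU path, not taken from this log):

```shell
#!/usr/bin/env bash
# Prefer vfio-pci when the kernel exposes IOMMU groups, or when unsafe
# no-IOMMU mode is enabled; otherwise fall back to uio_pci_generic.
pick_driver() {
  local sys=$1 unsafe=N
  local knob=$sys/module/vfio/parameters/enable_unsafe_noiommu_mode
  [ -e "$knob" ] && unsafe=$(cat "$knob")
  local groups=("$sys"/kernel/iommu_groups/*)
  if [ -e "${groups[0]}" ] || [ "$unsafe" = Y ]; then
    echo vfio-pci
  else
    echo uio_pci_generic
  fi
}

# Demo against a throwaway tree with two IOMMU groups.
fake=$(mktemp -d)
mkdir -p "$fake/kernel/iommu_groups/0" "$fake/kernel/iommu_groups/1"
pick_driver "$fake"      # prints vfio-pci
rm -rf "$fake"
```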
00:03:35.347 21:25:26 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.886 00:03:37.886 real 0m4.316s 00:03:37.886 user 0m0.907s 00:03:37.886 sys 0m1.627s 00:03:37.886 21:25:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.886 21:25:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.886 ************************************ 00:03:37.886 END TEST guess_driver 00:03:37.886 ************************************ 00:03:37.886 21:25:28 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:37.886 00:03:37.886 real 0m6.705s 00:03:37.886 user 0m1.462s 00:03:37.886 sys 0m2.523s 00:03:37.886 21:25:28 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.886 21:25:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:37.886 ************************************ 00:03:37.886 END TEST driver 00:03:37.886 ************************************ 00:03:37.886 21:25:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:37.886 21:25:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:37.886 21:25:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.886 21:25:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.886 21:25:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.886 ************************************ 00:03:37.886 START TEST devices 00:03:37.886 ************************************ 00:03:37.886 21:25:28 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:37.886 * Looking for test storage... 
00:03:37.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:37.886 21:25:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:37.886 21:25:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:37.886 21:25:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.886 21:25:28 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:38.822 21:25:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:38.822 21:25:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:38.822 21:25:29 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:38.822 21:25:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.822 21:25:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:38.822 21:25:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:38.822 21:25:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.822 21:25:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
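`get_zoned_devs` above walks /sys/block/nvme* and excludes any device whose queue/zoned attribute reads something other than `none` (here nvme0n1 reports `none`, so nothing is excluded). The check can be sketched against a temporary stand-in for the sysfs directory:

```shell
#!/usr/bin/env bash
# A device counts as zoned when sysfs queue/zoned exists and is not
# "none"; zoned devices are skipped by the mount tests.
is_block_zoned() {
  local dev=$1
  [ -e "$dev/queue/zoned" ] && [ "$(cat "$dev/queue/zoned")" != none ]
}

# Demo with a fake /sys/block/nvme0n1 tree.
dev=$(mktemp -d)
mkdir -p "$dev/queue"
echo none > "$dev/queue/zoned"
is_block_zoned "$dev" && echo zoned || echo "not zoned"   # not zoned
rm -rf "$dev"
```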
00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:03:38.822 21:25:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:38.822 21:25:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:38.822 21:25:29 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:39.082 No valid GPT data, bailing 00:03:39.082 21:25:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:39.082 21:25:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:39.082 21:25:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:39.082 21:25:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:39.082 21:25:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:39.082 21:25:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:39.082 21:25:29 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:39.082 21:25:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:39.082 21:25:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:39.082 21:25:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:03:39.082 21:25:29 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:39.082 21:25:29 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:39.082 21:25:29 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:39.082 21:25:29 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.082 21:25:29 setup.sh.devices -- 
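A disk only enters the device tests after the GPT probe finds no usable partition table ("No valid GPT data, bailing" is the expected path here) and its capacity clears min_disk_size=3221225472 bytes; the 1000204886016-byte nvme0n1 passes. That gate reduces to simple arithmetic, sketched below with sizes passed in directly rather than read from /sys/block:

```shell
#!/usr/bin/env bash
# Accept a disk only if it carries no partition table and is at least
# min_disk_size bytes (3 GiB), as in setup/devices.sh@198/204.
min_disk_size=3221225472

disk_qualifies() {
  local size_bytes=$1 pttype=$2   # empty pttype => no partition table
  [ -z "$pttype" ] && [ "$size_bytes" -ge "$min_disk_size" ]
}

disk_qualifies 1000204886016 ""  && echo "nvme0n1 qualifies"
disk_qualifies 1073741824    ""  || echo "1 GiB disk is too small"
```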
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.082 21:25:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:39.082 ************************************ 00:03:39.082 START TEST nvme_mount 00:03:39.082 ************************************ 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:39.082 21:25:29 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:39.082 21:25:29 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:40.019 Creating new GPT entries in memory. 00:03:40.019 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:40.019 other utilities. 00:03:40.019 21:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:40.019 21:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.019 21:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:40.019 21:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:40.019 21:25:30 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:40.957 Creating new GPT entries in memory. 00:03:40.957 The operation has completed successfully. 
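The `--new=1:2048:2099199` bounds in the sgdisk call above come straight from the setup/common.sh arithmetic visible in the trace: the 1 GiB request is converted to 512-byte sectors, the first partition starts at sector 2048, and each subsequent one would start at the previous end + 1. Reproducing the computation:

```shell
#!/usr/bin/env bash
# Derive the sgdisk --new argument the way setup/common.sh@51..60 does.
size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors = 2097152
part_start=0 part_end=0
for part in 1; do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  echo "--new=${part}:${part_start}:${part_end}"
done
```

This prints `--new=1:2048:2099199`, matching the partition created in the log (2048 + 2097152 − 1 = 2099199).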
00:03:40.957 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:40.957 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.957 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 246780 00:03:40.957 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.957 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:40.957 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.957 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:40.957 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.215 21:25:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.155 21:25:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.155 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:42.156 
21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:42.156 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:42.156 21:25:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:42.416 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:42.416 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:42.416 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:42.416 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:42.416 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:42.416 21:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:42.416 21:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.416 21:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:42.416 21:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:42.416 21:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.676 21:25:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.611 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:43.612 21:25:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.612 21:25:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.550 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.550 00:03:44.550 real 0m5.647s 00:03:44.550 user 0m1.256s 00:03:44.550 sys 0m2.117s 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.550 21:25:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:44.550 ************************************ 00:03:44.550 END TEST nvme_mount 00:03:44.550 ************************************ 00:03:44.811 21:25:35 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:44.811 21:25:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:03:44.811 21:25:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.811 21:25:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.811 21:25:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.811 ************************************ 00:03:44.811 START TEST dm_mount 00:03:44.811 ************************************ 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:44.811 21:25:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:45.750 Creating new GPT entries in memory. 00:03:45.750 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:45.750 other utilities. 00:03:45.750 21:25:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:45.750 21:25:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:45.750 21:25:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:45.750 21:25:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:45.750 21:25:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:46.702 Creating new GPT entries in memory. 00:03:46.702 The operation has completed successfully. 00:03:46.702 21:25:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:46.702 21:25:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.702 21:25:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:46.702 21:25:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.702 21:25:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:48.082 The operation has completed successfully. 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 248559 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.082 21:25:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.016 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.016 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:49.016 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:49.017 21:25:39 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.017 21:25:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:49.955 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:50.215 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.215 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:50.215 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.215 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.215 21:25:40 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:50.215 00:03:50.215 real 0m5.422s 00:03:50.215 user 0m0.865s 00:03:50.215 sys 0m1.493s 00:03:50.215 21:25:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.215 21:25:40 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:50.215 ************************************ 00:03:50.215 END TEST dm_mount 00:03:50.215 ************************************ 00:03:50.215 21:25:40 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:50.215 21:25:40 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:50.215 21:25:40 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:50.215 21:25:40 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.215 21:25:40 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.215 21:25:40 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:50.215 21:25:40 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.215 21:25:40 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:50.474 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:50.474 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:03:50.474 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:50.474 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:50.474 21:25:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:50.474 21:25:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.474 21:25:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:50.474 21:25:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.474 21:25:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.474 21:25:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:50.474 21:25:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:50.474 00:03:50.474 real 0m12.827s 00:03:50.474 user 0m2.761s 00:03:50.474 sys 0m4.552s 00:03:50.474 21:25:41 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.475 21:25:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.475 ************************************ 00:03:50.475 END TEST devices 00:03:50.475 ************************************ 00:03:50.475 21:25:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:50.475 00:03:50.475 real 0m39.447s 00:03:50.475 user 0m11.493s 00:03:50.475 sys 0m17.450s 00:03:50.475 21:25:41 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.475 21:25:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:50.475 ************************************ 00:03:50.475 END TEST setup.sh 00:03:50.475 ************************************ 00:03:50.475 21:25:41 -- common/autotest_common.sh@1142 -- # return 0 00:03:50.475 21:25:41 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:51.414 Hugepages 00:03:51.414 node hugesize free / total 
00:03:51.414 node0 1048576kB 0 / 0 00:03:51.672 node0 2048kB 2048 / 2048 00:03:51.672 node1 1048576kB 0 / 0 00:03:51.672 node1 2048kB 0 / 0 00:03:51.672 00:03:51.672 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.672 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:03:51.672 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:03:51.672 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:03:51.672 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:03:51.672 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:03:51.672 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:03:51.672 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:03:51.672 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:03:51.672 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:03:51.672 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:03:51.672 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:03:51.672 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:03:51.672 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:03:51.672 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:03:51.672 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:03:51.672 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:03:51.672 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:51.672 21:25:42 -- spdk/autotest.sh@130 -- # uname -s 00:03:51.672 21:25:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:51.672 21:25:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:51.672 21:25:42 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.612 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:52.612 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:52.612 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:52.872 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:52.872 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:52.872 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:52.872 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:52.872 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:52.872 
0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:52.872 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:52.872 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:52.872 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:52.872 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:52.872 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:52.872 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:52.872 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:53.813 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.813 21:25:44 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:54.754 21:25:45 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:54.754 21:25:45 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:54.754 21:25:45 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:54.754 21:25:45 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:54.754 21:25:45 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:54.754 21:25:45 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:54.754 21:25:45 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.754 21:25:45 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:54.754 21:25:45 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:54.754 21:25:45 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:54.754 21:25:45 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:03:54.754 21:25:45 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.694 Waiting for block devices as requested 00:03:55.694 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:03:55.953 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:03:55.953 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:03:55.953 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:03:56.213 0000:00:04.4 (8086 
3c24): vfio-pci -> ioatdma 00:03:56.213 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:03:56.213 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:03:56.474 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:56.474 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:56.474 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:03:56.474 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:03:56.734 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:03:56.734 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:03:56.734 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:03:56.992 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:03:56.992 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:56.992 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:56.992 21:25:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:56.992 21:25:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:03:56.992 21:25:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:56.992 21:25:47 -- common/autotest_common.sh@1502 -- # grep 0000:84:00.0/nvme/nvme 00:03:56.992 21:25:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:57.253 21:25:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:03:57.253 21:25:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:57.253 21:25:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:57.253 21:25:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:57.253 21:25:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:57.253 21:25:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:57.253 21:25:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:57.253 21:25:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:57.253 21:25:47 -- 
common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:57.253 21:25:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:57.253 21:25:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:57.253 21:25:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:57.253 21:25:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:57.253 21:25:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:57.253 21:25:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:57.253 21:25:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:57.253 21:25:47 -- common/autotest_common.sh@1557 -- # continue 00:03:57.253 21:25:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:57.253 21:25:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:57.253 21:25:47 -- common/autotest_common.sh@10 -- # set +x 00:03:57.253 21:25:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:57.253 21:25:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:57.253 21:25:47 -- common/autotest_common.sh@10 -- # set +x 00:03:57.253 21:25:47 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.194 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:58.194 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:58.194 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:58.194 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:58.194 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:58.194 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:58.194 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:58.194 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:58.194 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:58.194 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:58.194 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:58.194 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:58.194 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:58.194 0000:80:04.2 (8086 
3c22): ioatdma -> vfio-pci 00:03:58.194 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:58.194 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:59.174 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:59.174 21:25:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:59.174 21:25:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:59.174 21:25:49 -- common/autotest_common.sh@10 -- # set +x 00:03:59.174 21:25:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:59.174 21:25:49 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:59.174 21:25:49 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:59.174 21:25:49 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:59.174 21:25:49 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:59.174 21:25:49 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:59.174 21:25:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:59.174 21:25:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:59.174 21:25:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.174 21:25:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:59.174 21:25:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:59.486 21:25:49 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:59.486 21:25:49 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:03:59.486 21:25:49 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:59.486 21:25:49 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:03:59.486 21:25:49 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:59.486 21:25:49 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:59.486 21:25:49 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:59.486 21:25:49 -- 
common/autotest_common.sh@1586 -- # printf '%s\n' 0000:84:00.0 00:03:59.486 21:25:49 -- common/autotest_common.sh@1592 -- # [[ -z 0000:84:00.0 ]] 00:03:59.486 21:25:49 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=252583 00:03:59.487 21:25:49 -- common/autotest_common.sh@1598 -- # waitforlisten 252583 00:03:59.487 21:25:49 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.487 21:25:49 -- common/autotest_common.sh@829 -- # '[' -z 252583 ']' 00:03:59.487 21:25:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.487 21:25:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:59.487 21:25:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.487 21:25:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:59.487 21:25:49 -- common/autotest_common.sh@10 -- # set +x 00:03:59.487 [2024-07-15 21:25:50.032022] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:03:59.487 [2024-07-15 21:25:50.032133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252583 ] 00:03:59.487 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.487 [2024-07-15 21:25:50.092190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.487 [2024-07-15 21:25:50.212341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.778 21:25:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:59.778 21:25:50 -- common/autotest_common.sh@862 -- # return 0 00:03:59.778 21:25:50 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:59.778 21:25:50 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:59.778 21:25:50 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:04:03.100 nvme0n1 00:04:03.100 21:25:53 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:03.100 [2024-07-15 21:25:53.826507] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:03.100 [2024-07-15 21:25:53.826546] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:03.100 request: 00:04:03.100 { 00:04:03.100 "nvme_ctrlr_name": "nvme0", 00:04:03.100 "password": "test", 00:04:03.100 "method": "bdev_nvme_opal_revert", 00:04:03.100 "req_id": 1 00:04:03.100 } 00:04:03.100 Got JSON-RPC error response 00:04:03.100 response: 00:04:03.100 { 00:04:03.100 "code": -32603, 00:04:03.100 "message": "Internal error" 00:04:03.100 } 00:04:03.100 21:25:53 -- common/autotest_common.sh@1604 -- # true 00:04:03.100 21:25:53 -- common/autotest_common.sh@1605 -- # 
(( ++bdf_id )) 00:04:03.100 21:25:53 -- common/autotest_common.sh@1608 -- # killprocess 252583 00:04:03.100 21:25:53 -- common/autotest_common.sh@948 -- # '[' -z 252583 ']' 00:04:03.100 21:25:53 -- common/autotest_common.sh@952 -- # kill -0 252583 00:04:03.100 21:25:53 -- common/autotest_common.sh@953 -- # uname 00:04:03.100 21:25:53 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:03.100 21:25:53 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 252583 00:04:03.100 21:25:53 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:03.100 21:25:53 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:03.100 21:25:53 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 252583' 00:04:03.100 killing process with pid 252583 00:04:03.100 21:25:53 -- common/autotest_common.sh@967 -- # kill 252583 00:04:03.100 21:25:53 -- common/autotest_common.sh@972 -- # wait 252583 00:04:04.995 21:25:55 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:04.995 21:25:55 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:04.995 21:25:55 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.995 21:25:55 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.995 21:25:55 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:04.995 21:25:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.995 21:25:55 -- common/autotest_common.sh@10 -- # set +x 00:04:04.995 21:25:55 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:04.995 21:25:55 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:04.995 21:25:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.995 21:25:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.995 21:25:55 -- common/autotest_common.sh@10 -- # set +x 00:04:04.995 ************************************ 00:04:04.995 START TEST env 00:04:04.995 ************************************ 00:04:04.995 21:25:55 env -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:04.995 * Looking for test storage... 00:04:04.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:04.995 21:25:55 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:04.995 21:25:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.995 21:25:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.995 21:25:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.995 ************************************ 00:04:04.995 START TEST env_memory 00:04:04.995 ************************************ 00:04:04.995 21:25:55 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:04.995 00:04:04.995 00:04:04.995 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.995 http://cunit.sourceforge.net/ 00:04:04.995 00:04:04.995 00:04:04.995 Suite: memory 00:04:04.995 Test: alloc and free memory map ...[2024-07-15 21:25:55.688222] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.995 passed 00:04:04.995 Test: mem map translation ...[2024-07-15 21:25:55.718645] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.995 [2024-07-15 21:25:55.718671] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.995 [2024-07-15 21:25:55.718725] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual 
address 281474976710656 00:04:04.995 [2024-07-15 21:25:55.718740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.995 passed 00:04:04.995 Test: mem map registration ...[2024-07-15 21:25:55.783257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:04.995 [2024-07-15 21:25:55.783286] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:05.253 passed 00:04:05.253 Test: mem map adjacent registrations ...passed 00:04:05.253 00:04:05.253 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.253 suites 1 1 n/a 0 0 00:04:05.253 tests 4 4 4 0 0 00:04:05.253 asserts 152 152 152 0 n/a 00:04:05.253 00:04:05.253 Elapsed time = 0.217 seconds 00:04:05.253 00:04:05.253 real 0m0.226s 00:04:05.253 user 0m0.213s 00:04:05.253 sys 0m0.012s 00:04:05.253 21:25:55 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.253 21:25:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:05.253 ************************************ 00:04:05.253 END TEST env_memory 00:04:05.253 ************************************ 00:04:05.253 21:25:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.253 21:25:55 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:05.253 21:25:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.253 21:25:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.253 21:25:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.253 ************************************ 00:04:05.253 START TEST env_vtophys 00:04:05.253 ************************************ 00:04:05.253 21:25:55 
env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:05.253 EAL: lib.eal log level changed from notice to debug 00:04:05.253 EAL: Detected lcore 0 as core 0 on socket 0 00:04:05.253 EAL: Detected lcore 1 as core 1 on socket 0 00:04:05.253 EAL: Detected lcore 2 as core 2 on socket 0 00:04:05.253 EAL: Detected lcore 3 as core 3 on socket 0 00:04:05.253 EAL: Detected lcore 4 as core 4 on socket 0 00:04:05.253 EAL: Detected lcore 5 as core 5 on socket 0 00:04:05.253 EAL: Detected lcore 6 as core 6 on socket 0 00:04:05.253 EAL: Detected lcore 7 as core 7 on socket 0 00:04:05.253 EAL: Detected lcore 8 as core 0 on socket 1 00:04:05.253 EAL: Detected lcore 9 as core 1 on socket 1 00:04:05.253 EAL: Detected lcore 10 as core 2 on socket 1 00:04:05.253 EAL: Detected lcore 11 as core 3 on socket 1 00:04:05.253 EAL: Detected lcore 12 as core 4 on socket 1 00:04:05.253 EAL: Detected lcore 13 as core 5 on socket 1 00:04:05.253 EAL: Detected lcore 14 as core 6 on socket 1 00:04:05.254 EAL: Detected lcore 15 as core 7 on socket 1 00:04:05.254 EAL: Detected lcore 16 as core 0 on socket 0 00:04:05.254 EAL: Detected lcore 17 as core 1 on socket 0 00:04:05.254 EAL: Detected lcore 18 as core 2 on socket 0 00:04:05.254 EAL: Detected lcore 19 as core 3 on socket 0 00:04:05.254 EAL: Detected lcore 20 as core 4 on socket 0 00:04:05.254 EAL: Detected lcore 21 as core 5 on socket 0 00:04:05.254 EAL: Detected lcore 22 as core 6 on socket 0 00:04:05.254 EAL: Detected lcore 23 as core 7 on socket 0 00:04:05.254 EAL: Detected lcore 24 as core 0 on socket 1 00:04:05.254 EAL: Detected lcore 25 as core 1 on socket 1 00:04:05.254 EAL: Detected lcore 26 as core 2 on socket 1 00:04:05.254 EAL: Detected lcore 27 as core 3 on socket 1 00:04:05.254 EAL: Detected lcore 28 as core 4 on socket 1 00:04:05.254 EAL: Detected lcore 29 as core 5 on socket 1 00:04:05.254 EAL: Detected lcore 30 as core 6 on socket 1 00:04:05.254 
EAL: Detected lcore 31 as core 7 on socket 1 00:04:05.254 EAL: Maximum logical cores by configuration: 128 00:04:05.254 EAL: Detected CPU lcores: 32 00:04:05.254 EAL: Detected NUMA nodes: 2 00:04:05.254 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:05.254 EAL: Detected shared linkage of DPDK 00:04:05.254 EAL: No shared files mode enabled, IPC will be disabled 00:04:05.254 EAL: Bus pci wants IOVA as 'DC' 00:04:05.254 EAL: Buses did not request a specific IOVA mode. 00:04:05.254 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:05.254 EAL: Selected IOVA mode 'VA' 00:04:05.254 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.254 EAL: Probing VFIO support... 00:04:05.254 EAL: IOMMU type 1 (Type 1) is supported 00:04:05.254 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:05.254 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:05.254 EAL: VFIO support initialized 00:04:05.254 EAL: Ask a virtual area of 0x2e000 bytes 00:04:05.254 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:05.254 EAL: Setting up physically contiguous memory... 
00:04:05.254 EAL: Setting maximum number of open files to 524288 00:04:05.254 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:05.254 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:05.254 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:05.254 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.254 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:05.254 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.254 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.254 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:05.254 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:05.254 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.254 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:05.254 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.254 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.254 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:05.254 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:05.254 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.254 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:05.254 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.254 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.254 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:05.254 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:05.254 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.254 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:05.254 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.254 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.254 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:05.254 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:05.254 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:05.254 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.254 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:05.254 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:05.254 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.254 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:05.254 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:05.254 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.254 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:05.254 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:05.254 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.254 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:05.254 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:05.254 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.254 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:05.254 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:05.254 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.254 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:05.254 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:05.254 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.254 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:05.254 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:05.254 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.254 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:05.254 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:05.254 EAL: Hugepages will be freed exactly as allocated. 
00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: TSC frequency is ~2700000 KHz 00:04:05.254 EAL: Main lcore 0 is ready (tid=7f2045444a00;cpuset=[0]) 00:04:05.254 EAL: Trying to obtain current memory policy. 00:04:05.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.254 EAL: Restoring previous memory policy: 0 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was expanded by 2MB 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:05.254 EAL: Mem event callback 'spdk:(nil)' registered 00:04:05.254 00:04:05.254 00:04:05.254 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.254 http://cunit.sourceforge.net/ 00:04:05.254 00:04:05.254 00:04:05.254 Suite: components_suite 00:04:05.254 Test: vtophys_malloc_test ...passed 00:04:05.254 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.254 EAL: Restoring previous memory policy: 4 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.254 EAL: Trying to obtain current memory policy. 
00:04:05.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.254 EAL: Restoring previous memory policy: 4 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.254 EAL: Trying to obtain current memory policy. 00:04:05.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.254 EAL: Restoring previous memory policy: 4 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.254 EAL: Trying to obtain current memory policy. 00:04:05.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.254 EAL: Restoring previous memory policy: 4 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.254 EAL: Trying to obtain current memory policy. 
00:04:05.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.254 EAL: Restoring previous memory policy: 4 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was shrunk by 34MB 00:04:05.254 EAL: Trying to obtain current memory policy. 00:04:05.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.254 EAL: Restoring previous memory policy: 4 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.254 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.254 EAL: request: mp_malloc_sync 00:04:05.254 EAL: No shared files mode enabled, IPC is disabled 00:04:05.254 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.254 EAL: Trying to obtain current memory policy. 00:04:05.254 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.514 EAL: Restoring previous memory policy: 4 00:04:05.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.514 EAL: request: mp_malloc_sync 00:04:05.514 EAL: No shared files mode enabled, IPC is disabled 00:04:05.514 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.514 EAL: request: mp_malloc_sync 00:04:05.514 EAL: No shared files mode enabled, IPC is disabled 00:04:05.514 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.514 EAL: Trying to obtain current memory policy. 
00:04:05.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.514 EAL: Restoring previous memory policy: 4 00:04:05.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.514 EAL: request: mp_malloc_sync 00:04:05.514 EAL: No shared files mode enabled, IPC is disabled 00:04:05.514 EAL: Heap on socket 0 was expanded by 258MB 00:04:05.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.514 EAL: request: mp_malloc_sync 00:04:05.514 EAL: No shared files mode enabled, IPC is disabled 00:04:05.514 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.514 EAL: Trying to obtain current memory policy. 00:04:05.514 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.772 EAL: Restoring previous memory policy: 4 00:04:05.772 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.772 EAL: request: mp_malloc_sync 00:04:05.772 EAL: No shared files mode enabled, IPC is disabled 00:04:05.772 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.772 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.772 EAL: request: mp_malloc_sync 00:04:05.772 EAL: No shared files mode enabled, IPC is disabled 00:04:05.772 EAL: Heap on socket 0 was shrunk by 514MB 00:04:05.772 EAL: Trying to obtain current memory policy. 
00:04:05.772 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.029 EAL: Restoring previous memory policy: 4 00:04:06.029 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.029 EAL: request: mp_malloc_sync 00:04:06.029 EAL: No shared files mode enabled, IPC is disabled 00:04:06.029 EAL: Heap on socket 0 was expanded by 1026MB 00:04:06.029 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.286 EAL: request: mp_malloc_sync 00:04:06.286 EAL: No shared files mode enabled, IPC is disabled 00:04:06.286 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:06.286 passed 00:04:06.286 00:04:06.286 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.286 suites 1 1 n/a 0 0 00:04:06.286 tests 2 2 2 0 0 00:04:06.286 asserts 497 497 497 0 n/a 00:04:06.286 00:04:06.286 Elapsed time = 0.839 seconds 00:04:06.286 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.286 EAL: request: mp_malloc_sync 00:04:06.286 EAL: No shared files mode enabled, IPC is disabled 00:04:06.286 EAL: Heap on socket 0 was shrunk by 2MB 00:04:06.286 EAL: No shared files mode enabled, IPC is disabled 00:04:06.286 EAL: No shared files mode enabled, IPC is disabled 00:04:06.286 EAL: No shared files mode enabled, IPC is disabled 00:04:06.286 00:04:06.286 real 0m0.949s 00:04:06.286 user 0m0.448s 00:04:06.286 sys 0m0.465s 00:04:06.286 21:25:56 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.286 21:25:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:06.286 ************************************ 00:04:06.286 END TEST env_vtophys 00:04:06.286 ************************************ 00:04:06.286 21:25:56 env -- common/autotest_common.sh@1142 -- # return 0 00:04:06.286 21:25:56 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:06.286 21:25:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.286 21:25:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:06.286 21:25:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.286 ************************************ 00:04:06.286 START TEST env_pci 00:04:06.286 ************************************ 00:04:06.286 21:25:56 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:06.286 00:04:06.286 00:04:06.286 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.286 http://cunit.sourceforge.net/ 00:04:06.286 00:04:06.286 00:04:06.286 Suite: pci 00:04:06.286 Test: pci_hook ...[2024-07-15 21:25:56.942942] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 253275 has claimed it 00:04:06.286 EAL: Cannot find device (10000:00:01.0) 00:04:06.286 EAL: Failed to attach device on primary process 00:04:06.286 passed 00:04:06.286 00:04:06.286 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.286 suites 1 1 n/a 0 0 00:04:06.286 tests 1 1 1 0 0 00:04:06.286 asserts 25 25 25 0 n/a 00:04:06.286 00:04:06.286 Elapsed time = 0.017 seconds 00:04:06.286 00:04:06.286 real 0m0.030s 00:04:06.286 user 0m0.013s 00:04:06.286 sys 0m0.017s 00:04:06.286 21:25:56 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.286 21:25:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:06.286 ************************************ 00:04:06.286 END TEST env_pci 00:04:06.286 ************************************ 00:04:06.286 21:25:56 env -- common/autotest_common.sh@1142 -- # return 0 00:04:06.286 21:25:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:06.286 21:25:56 env -- env/env.sh@15 -- # uname 00:04:06.286 21:25:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:06.286 21:25:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:06.286 21:25:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.286 21:25:56 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:06.286 21:25:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.286 21:25:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.286 ************************************ 00:04:06.286 START TEST env_dpdk_post_init 00:04:06.286 ************************************ 00:04:06.286 21:25:57 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.286 EAL: Detected CPU lcores: 32 00:04:06.286 EAL: Detected NUMA nodes: 2 00:04:06.286 EAL: Detected shared linkage of DPDK 00:04:06.286 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.286 EAL: Selected IOVA mode 'VA' 00:04:06.286 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.286 EAL: VFIO support initialized 00:04:06.286 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.545 EAL: Using IOMMU type 1 (Type 1) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 
1) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:80:04.2 (socket 1) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:04:06.545 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:04:07.478 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:04:10.752 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:04:10.752 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:04:10.752 Starting DPDK initialization... 00:04:10.752 Starting SPDK post initialization... 00:04:10.752 SPDK NVMe probe 00:04:10.752 Attaching to 0000:84:00.0 00:04:10.752 Attached to 0000:84:00.0 00:04:10.752 Cleaning up... 
00:04:10.752 00:04:10.752 real 0m4.369s 00:04:10.752 user 0m3.270s 00:04:10.752 sys 0m0.166s 00:04:10.752 21:26:01 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.752 21:26:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.752 ************************************ 00:04:10.752 END TEST env_dpdk_post_init 00:04:10.752 ************************************ 00:04:10.752 21:26:01 env -- common/autotest_common.sh@1142 -- # return 0 00:04:10.752 21:26:01 env -- env/env.sh@26 -- # uname 00:04:10.752 21:26:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:10.752 21:26:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.752 21:26:01 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.752 21:26:01 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.752 21:26:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.752 ************************************ 00:04:10.752 START TEST env_mem_callbacks 00:04:10.752 ************************************ 00:04:10.752 21:26:01 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:10.752 EAL: Detected CPU lcores: 32 00:04:10.752 EAL: Detected NUMA nodes: 2 00:04:10.752 EAL: Detected shared linkage of DPDK 00:04:10.752 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.752 EAL: Selected IOVA mode 'VA' 00:04:10.752 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.752 EAL: VFIO support initialized 00:04:10.752 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.752 00:04:10.752 00:04:10.752 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.752 http://cunit.sourceforge.net/ 00:04:10.752 00:04:10.752 00:04:10.752 Suite: memory 00:04:10.752 Test: test ... 
00:04:10.752 register 0x200000200000 2097152 00:04:10.752 malloc 3145728 00:04:10.752 register 0x200000400000 4194304 00:04:10.752 buf 0x200000500000 len 3145728 PASSED 00:04:10.752 malloc 64 00:04:10.752 buf 0x2000004fff40 len 64 PASSED 00:04:10.752 malloc 4194304 00:04:10.752 register 0x200000800000 6291456 00:04:10.752 buf 0x200000a00000 len 4194304 PASSED 00:04:10.752 free 0x200000500000 3145728 00:04:10.752 free 0x2000004fff40 64 00:04:10.752 unregister 0x200000400000 4194304 PASSED 00:04:10.752 free 0x200000a00000 4194304 00:04:10.752 unregister 0x200000800000 6291456 PASSED 00:04:10.752 malloc 8388608 00:04:10.752 register 0x200000400000 10485760 00:04:10.752 buf 0x200000600000 len 8388608 PASSED 00:04:10.752 free 0x200000600000 8388608 00:04:10.752 unregister 0x200000400000 10485760 PASSED 00:04:10.752 passed 00:04:10.752 00:04:10.752 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.752 suites 1 1 n/a 0 0 00:04:10.752 tests 1 1 1 0 0 00:04:10.752 asserts 15 15 15 0 n/a 00:04:10.752 00:04:10.752 Elapsed time = 0.005 seconds 00:04:10.752 00:04:10.752 real 0m0.044s 00:04:10.752 user 0m0.013s 00:04:10.752 sys 0m0.030s 00:04:10.752 21:26:01 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.752 21:26:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:10.752 ************************************ 00:04:10.752 END TEST env_mem_callbacks 00:04:10.752 ************************************ 00:04:10.752 21:26:01 env -- common/autotest_common.sh@1142 -- # return 0 00:04:10.752 00:04:10.752 real 0m5.948s 00:04:10.752 user 0m4.081s 00:04:10.752 sys 0m0.912s 00:04:10.752 21:26:01 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.752 21:26:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.752 ************************************ 00:04:10.752 END TEST env 00:04:10.752 ************************************ 00:04:11.010 21:26:01 -- common/autotest_common.sh@1142 -- # return 0 
00:04:11.010 21:26:01 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.010 21:26:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.010 21:26:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.010 21:26:01 -- common/autotest_common.sh@10 -- # set +x 00:04:11.010 ************************************ 00:04:11.010 START TEST rpc 00:04:11.010 ************************************ 00:04:11.010 21:26:01 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:11.010 * Looking for test storage... 00:04:11.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.010 21:26:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=253804 00:04:11.010 21:26:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.010 21:26:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 253804 00:04:11.010 21:26:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:11.010 21:26:01 rpc -- common/autotest_common.sh@829 -- # '[' -z 253804 ']' 00:04:11.010 21:26:01 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.010 21:26:01 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.010 21:26:01 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.010 21:26:01 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.010 21:26:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.010 [2024-07-15 21:26:01.683259] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:04:11.010 [2024-07-15 21:26:01.683350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253804 ] 00:04:11.010 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.010 [2024-07-15 21:26:01.743278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.268 [2024-07-15 21:26:01.860245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:11.268 [2024-07-15 21:26:01.860295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 253804' to capture a snapshot of events at runtime. 00:04:11.268 [2024-07-15 21:26:01.860311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:11.268 [2024-07-15 21:26:01.860324] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:11.268 [2024-07-15 21:26:01.860339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid253804 for offline analysis/debug. 
00:04:11.268 [2024-07-15 21:26:01.860369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.526 21:26:02 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.526 21:26:02 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:11.526 21:26:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.526 21:26:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:11.526 21:26:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:11.526 21:26:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:11.526 21:26:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.526 21:26:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.526 21:26:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.526 ************************************ 00:04:11.526 START TEST rpc_integrity 00:04:11.526 ************************************ 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.526 21:26:02 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.526 { 00:04:11.526 "name": "Malloc0", 00:04:11.526 "aliases": [ 00:04:11.526 "167f8a90-f481-4a59-800c-2f03b47bee79" 00:04:11.526 ], 00:04:11.526 "product_name": "Malloc disk", 00:04:11.526 "block_size": 512, 00:04:11.526 "num_blocks": 16384, 00:04:11.526 "uuid": "167f8a90-f481-4a59-800c-2f03b47bee79", 00:04:11.526 "assigned_rate_limits": { 00:04:11.526 "rw_ios_per_sec": 0, 00:04:11.526 "rw_mbytes_per_sec": 0, 00:04:11.526 "r_mbytes_per_sec": 0, 00:04:11.526 "w_mbytes_per_sec": 0 00:04:11.526 }, 00:04:11.526 "claimed": false, 00:04:11.526 "zoned": false, 00:04:11.526 "supported_io_types": { 00:04:11.526 "read": true, 00:04:11.526 "write": true, 00:04:11.526 "unmap": true, 00:04:11.526 "flush": true, 00:04:11.526 "reset": true, 00:04:11.526 "nvme_admin": false, 00:04:11.526 "nvme_io": false, 00:04:11.526 "nvme_io_md": false, 00:04:11.526 "write_zeroes": true, 00:04:11.526 "zcopy": true, 00:04:11.526 "get_zone_info": false, 00:04:11.526 
"zone_management": false, 00:04:11.526 "zone_append": false, 00:04:11.526 "compare": false, 00:04:11.526 "compare_and_write": false, 00:04:11.526 "abort": true, 00:04:11.526 "seek_hole": false, 00:04:11.526 "seek_data": false, 00:04:11.526 "copy": true, 00:04:11.526 "nvme_iov_md": false 00:04:11.526 }, 00:04:11.526 "memory_domains": [ 00:04:11.526 { 00:04:11.526 "dma_device_id": "system", 00:04:11.526 "dma_device_type": 1 00:04:11.526 }, 00:04:11.526 { 00:04:11.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.526 "dma_device_type": 2 00:04:11.526 } 00:04:11.526 ], 00:04:11.526 "driver_specific": {} 00:04:11.526 } 00:04:11.526 ]' 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.526 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.526 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.527 [2024-07-15 21:26:02.232789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.527 [2024-07-15 21:26:02.232834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.527 [2024-07-15 21:26:02.232857] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcd0380 00:04:11.527 [2024-07-15 21:26:02.232873] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.527 [2024-07-15 21:26:02.234424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.527 [2024-07-15 21:26:02.234450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.527 Passthru0 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.527 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.527 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.527 { 00:04:11.527 "name": "Malloc0", 00:04:11.527 "aliases": [ 00:04:11.527 "167f8a90-f481-4a59-800c-2f03b47bee79" 00:04:11.527 ], 00:04:11.527 "product_name": "Malloc disk", 00:04:11.527 "block_size": 512, 00:04:11.527 "num_blocks": 16384, 00:04:11.527 "uuid": "167f8a90-f481-4a59-800c-2f03b47bee79", 00:04:11.527 "assigned_rate_limits": { 00:04:11.527 "rw_ios_per_sec": 0, 00:04:11.527 "rw_mbytes_per_sec": 0, 00:04:11.527 "r_mbytes_per_sec": 0, 00:04:11.527 "w_mbytes_per_sec": 0 00:04:11.527 }, 00:04:11.527 "claimed": true, 00:04:11.527 "claim_type": "exclusive_write", 00:04:11.527 "zoned": false, 00:04:11.527 "supported_io_types": { 00:04:11.527 "read": true, 00:04:11.527 "write": true, 00:04:11.527 "unmap": true, 00:04:11.527 "flush": true, 00:04:11.527 "reset": true, 00:04:11.527 "nvme_admin": false, 00:04:11.527 "nvme_io": false, 00:04:11.527 "nvme_io_md": false, 00:04:11.527 "write_zeroes": true, 00:04:11.527 "zcopy": true, 00:04:11.527 "get_zone_info": false, 00:04:11.527 "zone_management": false, 00:04:11.527 "zone_append": false, 00:04:11.527 "compare": false, 00:04:11.527 "compare_and_write": false, 00:04:11.527 "abort": true, 00:04:11.527 "seek_hole": false, 00:04:11.527 "seek_data": false, 00:04:11.527 "copy": true, 00:04:11.527 "nvme_iov_md": false 00:04:11.527 }, 00:04:11.527 "memory_domains": [ 00:04:11.527 { 00:04:11.527 "dma_device_id": "system", 00:04:11.527 "dma_device_type": 1 00:04:11.527 }, 00:04:11.527 { 00:04:11.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.527 "dma_device_type": 2 00:04:11.527 } 00:04:11.527 ], 00:04:11.527 "driver_specific": {} 00:04:11.527 }, 00:04:11.527 { 
00:04:11.527 "name": "Passthru0", 00:04:11.527 "aliases": [ 00:04:11.527 "e40eefb5-164f-57a7-a31d-f6b64804cb2c" 00:04:11.527 ], 00:04:11.527 "product_name": "passthru", 00:04:11.527 "block_size": 512, 00:04:11.527 "num_blocks": 16384, 00:04:11.527 "uuid": "e40eefb5-164f-57a7-a31d-f6b64804cb2c", 00:04:11.527 "assigned_rate_limits": { 00:04:11.527 "rw_ios_per_sec": 0, 00:04:11.527 "rw_mbytes_per_sec": 0, 00:04:11.527 "r_mbytes_per_sec": 0, 00:04:11.527 "w_mbytes_per_sec": 0 00:04:11.527 }, 00:04:11.527 "claimed": false, 00:04:11.527 "zoned": false, 00:04:11.527 "supported_io_types": { 00:04:11.527 "read": true, 00:04:11.527 "write": true, 00:04:11.527 "unmap": true, 00:04:11.527 "flush": true, 00:04:11.527 "reset": true, 00:04:11.527 "nvme_admin": false, 00:04:11.527 "nvme_io": false, 00:04:11.527 "nvme_io_md": false, 00:04:11.527 "write_zeroes": true, 00:04:11.527 "zcopy": true, 00:04:11.527 "get_zone_info": false, 00:04:11.527 "zone_management": false, 00:04:11.527 "zone_append": false, 00:04:11.527 "compare": false, 00:04:11.527 "compare_and_write": false, 00:04:11.527 "abort": true, 00:04:11.527 "seek_hole": false, 00:04:11.527 "seek_data": false, 00:04:11.527 "copy": true, 00:04:11.527 "nvme_iov_md": false 00:04:11.527 }, 00:04:11.527 "memory_domains": [ 00:04:11.527 { 00:04:11.527 "dma_device_id": "system", 00:04:11.527 "dma_device_type": 1 00:04:11.527 }, 00:04:11.527 { 00:04:11.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.527 "dma_device_type": 2 00:04:11.527 } 00:04:11.527 ], 00:04:11.527 "driver_specific": { 00:04:11.527 "passthru": { 00:04:11.527 "name": "Passthru0", 00:04:11.527 "base_bdev_name": "Malloc0" 00:04:11.527 } 00:04:11.527 } 00:04:11.527 } 00:04:11.527 ]' 00:04:11.527 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.527 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.527 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.527 21:26:02 
rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.527 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.527 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.527 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.785 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.785 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.785 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.785 21:26:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.785 00:04:11.785 real 0m0.256s 00:04:11.785 user 0m0.163s 00:04:11.785 sys 0m0.029s 00:04:11.785 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.785 21:26:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.785 ************************************ 00:04:11.785 END TEST rpc_integrity 00:04:11.785 ************************************ 00:04:11.785 21:26:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.785 21:26:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:11.785 21:26:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.785 21:26:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.785 21:26:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.785 
************************************ 00:04:11.785 START TEST rpc_plugins 00:04:11.785 ************************************ 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:11.785 { 00:04:11.785 "name": "Malloc1", 00:04:11.785 "aliases": [ 00:04:11.785 "0c34698c-96e1-4f61-a29a-93fae75aabd6" 00:04:11.785 ], 00:04:11.785 "product_name": "Malloc disk", 00:04:11.785 "block_size": 4096, 00:04:11.785 "num_blocks": 256, 00:04:11.785 "uuid": "0c34698c-96e1-4f61-a29a-93fae75aabd6", 00:04:11.785 "assigned_rate_limits": { 00:04:11.785 "rw_ios_per_sec": 0, 00:04:11.785 "rw_mbytes_per_sec": 0, 00:04:11.785 "r_mbytes_per_sec": 0, 00:04:11.785 "w_mbytes_per_sec": 0 00:04:11.785 }, 00:04:11.785 "claimed": false, 00:04:11.785 "zoned": false, 00:04:11.785 "supported_io_types": { 00:04:11.785 "read": true, 00:04:11.785 "write": true, 00:04:11.785 "unmap": true, 00:04:11.785 "flush": true, 00:04:11.785 "reset": true, 00:04:11.785 "nvme_admin": false, 00:04:11.785 "nvme_io": false, 00:04:11.785 "nvme_io_md": false, 00:04:11.785 "write_zeroes": true, 00:04:11.785 "zcopy": true, 00:04:11.785 
"get_zone_info": false, 00:04:11.785 "zone_management": false, 00:04:11.785 "zone_append": false, 00:04:11.785 "compare": false, 00:04:11.785 "compare_and_write": false, 00:04:11.785 "abort": true, 00:04:11.785 "seek_hole": false, 00:04:11.785 "seek_data": false, 00:04:11.785 "copy": true, 00:04:11.785 "nvme_iov_md": false 00:04:11.785 }, 00:04:11.785 "memory_domains": [ 00:04:11.785 { 00:04:11.785 "dma_device_id": "system", 00:04:11.785 "dma_device_type": 1 00:04:11.785 }, 00:04:11.785 { 00:04:11.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.785 "dma_device_type": 2 00:04:11.785 } 00:04:11.785 ], 00:04:11.785 "driver_specific": {} 00:04:11.785 } 00:04:11.785 ]' 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:11.785 21:26:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.785 00:04:11.785 real 0m0.130s 00:04:11.785 user 0m0.078s 00:04:11.785 sys 0m0.018s 00:04:11.785 21:26:02 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.785 21:26:02 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:11.785 ************************************ 00:04:11.785 END TEST rpc_plugins 00:04:11.785 ************************************ 00:04:11.785 21:26:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.785 21:26:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.785 21:26:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.785 21:26:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.785 21:26:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.043 ************************************ 00:04:12.043 START TEST rpc_trace_cmd_test 00:04:12.043 ************************************ 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:12.043 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid253804", 00:04:12.043 "tpoint_group_mask": "0x8", 00:04:12.043 "iscsi_conn": { 00:04:12.043 "mask": "0x2", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "scsi": { 00:04:12.043 "mask": "0x4", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "bdev": { 00:04:12.043 "mask": "0x8", 00:04:12.043 "tpoint_mask": "0xffffffffffffffff" 00:04:12.043 }, 00:04:12.043 "nvmf_rdma": { 00:04:12.043 "mask": "0x10", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "nvmf_tcp": { 00:04:12.043 "mask": "0x20", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 
00:04:12.043 "ftl": { 00:04:12.043 "mask": "0x40", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "blobfs": { 00:04:12.043 "mask": "0x80", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "dsa": { 00:04:12.043 "mask": "0x200", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "thread": { 00:04:12.043 "mask": "0x400", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "nvme_pcie": { 00:04:12.043 "mask": "0x800", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "iaa": { 00:04:12.043 "mask": "0x1000", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "nvme_tcp": { 00:04:12.043 "mask": "0x2000", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "bdev_nvme": { 00:04:12.043 "mask": "0x4000", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 }, 00:04:12.043 "sock": { 00:04:12.043 "mask": "0x8000", 00:04:12.043 "tpoint_mask": "0x0" 00:04:12.043 } 00:04:12.043 }' 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:12.043 00:04:12.043 real 0m0.213s 00:04:12.043 user 0m0.184s 00:04:12.043 sys 0m0.020s 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.043 21:26:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:12.043 ************************************ 00:04:12.043 END TEST rpc_trace_cmd_test 00:04:12.043 ************************************ 00:04:12.043 21:26:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:12.043 21:26:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:12.043 21:26:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:12.043 21:26:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:12.043 21:26:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.043 21:26:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.043 21:26:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.302 ************************************ 00:04:12.302 START TEST rpc_daemon_integrity 00:04:12.302 ************************************ 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.302 21:26:02 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:12.302 { 00:04:12.302 "name": "Malloc2", 00:04:12.302 "aliases": [ 00:04:12.302 "85f13b36-2232-4377-9551-23f274b19f59" 00:04:12.302 ], 00:04:12.302 "product_name": "Malloc disk", 00:04:12.302 "block_size": 512, 00:04:12.302 "num_blocks": 16384, 00:04:12.302 "uuid": "85f13b36-2232-4377-9551-23f274b19f59", 00:04:12.302 "assigned_rate_limits": { 00:04:12.302 "rw_ios_per_sec": 0, 00:04:12.302 "rw_mbytes_per_sec": 0, 00:04:12.302 "r_mbytes_per_sec": 0, 00:04:12.302 "w_mbytes_per_sec": 0 00:04:12.302 }, 00:04:12.302 "claimed": false, 00:04:12.302 "zoned": false, 00:04:12.302 "supported_io_types": { 00:04:12.302 "read": true, 00:04:12.302 "write": true, 00:04:12.302 "unmap": true, 00:04:12.302 "flush": true, 00:04:12.302 "reset": true, 00:04:12.302 "nvme_admin": false, 00:04:12.302 "nvme_io": false, 00:04:12.302 "nvme_io_md": false, 00:04:12.302 "write_zeroes": true, 00:04:12.302 "zcopy": true, 00:04:12.302 "get_zone_info": false, 00:04:12.302 "zone_management": false, 00:04:12.302 "zone_append": false, 00:04:12.302 "compare": false, 00:04:12.302 "compare_and_write": false, 00:04:12.302 "abort": true, 00:04:12.302 "seek_hole": false, 00:04:12.302 "seek_data": false, 00:04:12.302 "copy": true, 00:04:12.302 "nvme_iov_md": false 00:04:12.302 }, 00:04:12.302 "memory_domains": [ 00:04:12.302 { 00:04:12.302 "dma_device_id": "system", 00:04:12.302 "dma_device_type": 
1 00:04:12.302 }, 00:04:12.302 { 00:04:12.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.302 "dma_device_type": 2 00:04:12.302 } 00:04:12.302 ], 00:04:12.302 "driver_specific": {} 00:04:12.302 } 00:04:12.302 ]' 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.302 [2024-07-15 21:26:02.978901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:12.302 [2024-07-15 21:26:02.978944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:12.302 [2024-07-15 21:26:02.978967] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb1f690 00:04:12.302 [2024-07-15 21:26:02.978983] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:12.302 [2024-07-15 21:26:02.980420] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:12.302 [2024-07-15 21:26:02.980446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:12.302 Passthru0 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.302 21:26:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:04:12.302 { 00:04:12.302 "name": "Malloc2", 00:04:12.302 "aliases": [ 00:04:12.302 "85f13b36-2232-4377-9551-23f274b19f59" 00:04:12.302 ], 00:04:12.302 "product_name": "Malloc disk", 00:04:12.302 "block_size": 512, 00:04:12.302 "num_blocks": 16384, 00:04:12.302 "uuid": "85f13b36-2232-4377-9551-23f274b19f59", 00:04:12.302 "assigned_rate_limits": { 00:04:12.302 "rw_ios_per_sec": 0, 00:04:12.302 "rw_mbytes_per_sec": 0, 00:04:12.302 "r_mbytes_per_sec": 0, 00:04:12.302 "w_mbytes_per_sec": 0 00:04:12.302 }, 00:04:12.302 "claimed": true, 00:04:12.302 "claim_type": "exclusive_write", 00:04:12.302 "zoned": false, 00:04:12.302 "supported_io_types": { 00:04:12.302 "read": true, 00:04:12.302 "write": true, 00:04:12.302 "unmap": true, 00:04:12.302 "flush": true, 00:04:12.302 "reset": true, 00:04:12.302 "nvme_admin": false, 00:04:12.302 "nvme_io": false, 00:04:12.302 "nvme_io_md": false, 00:04:12.302 "write_zeroes": true, 00:04:12.302 "zcopy": true, 00:04:12.302 "get_zone_info": false, 00:04:12.302 "zone_management": false, 00:04:12.302 "zone_append": false, 00:04:12.302 "compare": false, 00:04:12.302 "compare_and_write": false, 00:04:12.302 "abort": true, 00:04:12.302 "seek_hole": false, 00:04:12.302 "seek_data": false, 00:04:12.302 "copy": true, 00:04:12.302 "nvme_iov_md": false 00:04:12.302 }, 00:04:12.302 "memory_domains": [ 00:04:12.302 { 00:04:12.302 "dma_device_id": "system", 00:04:12.302 "dma_device_type": 1 00:04:12.302 }, 00:04:12.302 { 00:04:12.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.302 "dma_device_type": 2 00:04:12.302 } 00:04:12.302 ], 00:04:12.302 "driver_specific": {} 00:04:12.302 }, 00:04:12.302 { 00:04:12.302 "name": "Passthru0", 00:04:12.302 "aliases": [ 00:04:12.303 "2eed31f1-f857-5028-8441-e92252f4a3ce" 00:04:12.303 ], 00:04:12.303 "product_name": "passthru", 00:04:12.303 "block_size": 512, 00:04:12.303 "num_blocks": 16384, 00:04:12.303 "uuid": "2eed31f1-f857-5028-8441-e92252f4a3ce", 00:04:12.303 "assigned_rate_limits": { 00:04:12.303 
"rw_ios_per_sec": 0, 00:04:12.303 "rw_mbytes_per_sec": 0, 00:04:12.303 "r_mbytes_per_sec": 0, 00:04:12.303 "w_mbytes_per_sec": 0 00:04:12.303 }, 00:04:12.303 "claimed": false, 00:04:12.303 "zoned": false, 00:04:12.303 "supported_io_types": { 00:04:12.303 "read": true, 00:04:12.303 "write": true, 00:04:12.303 "unmap": true, 00:04:12.303 "flush": true, 00:04:12.303 "reset": true, 00:04:12.303 "nvme_admin": false, 00:04:12.303 "nvme_io": false, 00:04:12.303 "nvme_io_md": false, 00:04:12.303 "write_zeroes": true, 00:04:12.303 "zcopy": true, 00:04:12.303 "get_zone_info": false, 00:04:12.303 "zone_management": false, 00:04:12.303 "zone_append": false, 00:04:12.303 "compare": false, 00:04:12.303 "compare_and_write": false, 00:04:12.303 "abort": true, 00:04:12.303 "seek_hole": false, 00:04:12.303 "seek_data": false, 00:04:12.303 "copy": true, 00:04:12.303 "nvme_iov_md": false 00:04:12.303 }, 00:04:12.303 "memory_domains": [ 00:04:12.303 { 00:04:12.303 "dma_device_id": "system", 00:04:12.303 "dma_device_type": 1 00:04:12.303 }, 00:04:12.303 { 00:04:12.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:12.303 "dma_device_type": 2 00:04:12.303 } 00:04:12.303 ], 00:04:12.303 "driver_specific": { 00:04:12.303 "passthru": { 00:04:12.303 "name": "Passthru0", 00:04:12.303 "base_bdev_name": "Malloc2" 00:04:12.303 } 00:04:12.303 } 00:04:12.303 } 00:04:12.303 ]' 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.303 21:26:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:12.562 21:26:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.562 00:04:12.562 real 0m0.250s 00:04:12.562 user 0m0.162s 00:04:12.562 sys 0m0.028s 00:04:12.562 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.562 21:26:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.562 ************************************ 00:04:12.562 END TEST rpc_daemon_integrity 00:04:12.562 ************************************ 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:12.562 21:26:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:12.562 21:26:03 rpc -- rpc/rpc.sh@84 -- # killprocess 253804 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@948 -- # '[' -z 253804 ']' 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@952 -- # kill -0 253804 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@953 -- # uname 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
253804 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 253804' 00:04:12.562 killing process with pid 253804 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@967 -- # kill 253804 00:04:12.562 21:26:03 rpc -- common/autotest_common.sh@972 -- # wait 253804 00:04:12.819 00:04:12.819 real 0m1.916s 00:04:12.819 user 0m2.511s 00:04:12.819 sys 0m0.585s 00:04:12.819 21:26:03 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.819 21:26:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.819 ************************************ 00:04:12.819 END TEST rpc 00:04:12.819 ************************************ 00:04:12.819 21:26:03 -- common/autotest_common.sh@1142 -- # return 0 00:04:12.819 21:26:03 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:12.819 21:26:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.819 21:26:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.819 21:26:03 -- common/autotest_common.sh@10 -- # set +x 00:04:12.819 ************************************ 00:04:12.819 START TEST skip_rpc 00:04:12.819 ************************************ 00:04:12.819 21:26:03 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:12.819 * Looking for test storage... 
00:04:12.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.819 21:26:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.819 21:26:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:12.819 21:26:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:12.819 21:26:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.819 21:26:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.819 21:26:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.077 ************************************ 00:04:13.077 START TEST skip_rpc 00:04:13.077 ************************************ 00:04:13.077 21:26:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:13.077 21:26:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=254172 00:04:13.077 21:26:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:13.077 21:26:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.077 21:26:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:13.077 [2024-07-15 21:26:03.686344] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:04:13.077 [2024-07-15 21:26:03.686451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254172 ] 00:04:13.077 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.077 [2024-07-15 21:26:03.748377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.077 [2024-07-15 21:26:03.868270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 254172 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 254172 ']' 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 254172 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 254172 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 254172' 00:04:18.348 killing process with pid 254172 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 254172 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 254172 00:04:18.348 00:04:18.348 real 0m5.311s 00:04:18.348 user 0m5.042s 00:04:18.348 sys 0m0.267s 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.348 21:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.348 ************************************ 00:04:18.348 END TEST skip_rpc 00:04:18.348 ************************************ 00:04:18.348 21:26:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.348 21:26:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.348 21:26:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.348 21:26:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.348 
21:26:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.348 ************************************ 00:04:18.348 START TEST skip_rpc_with_json 00:04:18.348 ************************************ 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=254699 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 254699 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 254699 ']' 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.348 21:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.348 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.348 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.348 [2024-07-15 21:26:09.055916] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:04:18.348 [2024-07-15 21:26:09.056017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254699 ] 00:04:18.348 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.348 [2024-07-15 21:26:09.109772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.606 [2024-07-15 21:26:09.215660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.865 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.865 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:18.865 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:18.865 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.865 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.865 [2024-07-15 21:26:09.427988] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:18.865 request: 00:04:18.865 { 00:04:18.865 "trtype": "tcp", 00:04:18.865 "method": "nvmf_get_transports", 00:04:18.865 "req_id": 1 00:04:18.865 } 00:04:18.865 Got JSON-RPC error response 00:04:18.865 response: 00:04:18.865 { 00:04:18.865 "code": -19, 00:04:18.865 "message": "No such device" 00:04:18.866 } 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.866 [2024-07-15 21:26:09.436091] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.866 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:18.866 { 00:04:18.866 "subsystems": [ 00:04:18.866 { 00:04:18.866 "subsystem": "vfio_user_target", 00:04:18.866 "config": null 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "keyring", 00:04:18.866 "config": [] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "iobuf", 00:04:18.866 "config": [ 00:04:18.866 { 00:04:18.866 "method": "iobuf_set_options", 00:04:18.866 "params": { 00:04:18.866 "small_pool_count": 8192, 00:04:18.866 "large_pool_count": 1024, 00:04:18.866 "small_bufsize": 8192, 00:04:18.866 "large_bufsize": 135168 00:04:18.866 } 00:04:18.866 } 00:04:18.866 ] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "sock", 00:04:18.866 "config": [ 00:04:18.866 { 00:04:18.866 "method": "sock_set_default_impl", 00:04:18.866 "params": { 00:04:18.866 "impl_name": "posix" 00:04:18.866 } 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "method": "sock_impl_set_options", 00:04:18.866 "params": { 00:04:18.866 "impl_name": "ssl", 00:04:18.866 "recv_buf_size": 4096, 00:04:18.866 "send_buf_size": 4096, 00:04:18.866 "enable_recv_pipe": true, 00:04:18.866 "enable_quickack": false, 00:04:18.866 "enable_placement_id": 0, 00:04:18.866 "enable_zerocopy_send_server": true, 00:04:18.866 "enable_zerocopy_send_client": false, 00:04:18.866 "zerocopy_threshold": 0, 
00:04:18.866 "tls_version": 0, 00:04:18.866 "enable_ktls": false 00:04:18.866 } 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "method": "sock_impl_set_options", 00:04:18.866 "params": { 00:04:18.866 "impl_name": "posix", 00:04:18.866 "recv_buf_size": 2097152, 00:04:18.866 "send_buf_size": 2097152, 00:04:18.866 "enable_recv_pipe": true, 00:04:18.866 "enable_quickack": false, 00:04:18.866 "enable_placement_id": 0, 00:04:18.866 "enable_zerocopy_send_server": true, 00:04:18.866 "enable_zerocopy_send_client": false, 00:04:18.866 "zerocopy_threshold": 0, 00:04:18.866 "tls_version": 0, 00:04:18.866 "enable_ktls": false 00:04:18.866 } 00:04:18.866 } 00:04:18.866 ] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "vmd", 00:04:18.866 "config": [] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "accel", 00:04:18.866 "config": [ 00:04:18.866 { 00:04:18.866 "method": "accel_set_options", 00:04:18.866 "params": { 00:04:18.866 "small_cache_size": 128, 00:04:18.866 "large_cache_size": 16, 00:04:18.866 "task_count": 2048, 00:04:18.866 "sequence_count": 2048, 00:04:18.866 "buf_count": 2048 00:04:18.866 } 00:04:18.866 } 00:04:18.866 ] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "bdev", 00:04:18.866 "config": [ 00:04:18.866 { 00:04:18.866 "method": "bdev_set_options", 00:04:18.866 "params": { 00:04:18.866 "bdev_io_pool_size": 65535, 00:04:18.866 "bdev_io_cache_size": 256, 00:04:18.866 "bdev_auto_examine": true, 00:04:18.866 "iobuf_small_cache_size": 128, 00:04:18.866 "iobuf_large_cache_size": 16 00:04:18.866 } 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "method": "bdev_raid_set_options", 00:04:18.866 "params": { 00:04:18.866 "process_window_size_kb": 1024 00:04:18.866 } 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "method": "bdev_iscsi_set_options", 00:04:18.866 "params": { 00:04:18.866 "timeout_sec": 30 00:04:18.866 } 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "method": "bdev_nvme_set_options", 00:04:18.866 "params": { 00:04:18.866 "action_on_timeout": 
"none", 00:04:18.866 "timeout_us": 0, 00:04:18.866 "timeout_admin_us": 0, 00:04:18.866 "keep_alive_timeout_ms": 10000, 00:04:18.866 "arbitration_burst": 0, 00:04:18.866 "low_priority_weight": 0, 00:04:18.866 "medium_priority_weight": 0, 00:04:18.866 "high_priority_weight": 0, 00:04:18.866 "nvme_adminq_poll_period_us": 10000, 00:04:18.866 "nvme_ioq_poll_period_us": 0, 00:04:18.866 "io_queue_requests": 0, 00:04:18.866 "delay_cmd_submit": true, 00:04:18.866 "transport_retry_count": 4, 00:04:18.866 "bdev_retry_count": 3, 00:04:18.866 "transport_ack_timeout": 0, 00:04:18.866 "ctrlr_loss_timeout_sec": 0, 00:04:18.866 "reconnect_delay_sec": 0, 00:04:18.866 "fast_io_fail_timeout_sec": 0, 00:04:18.866 "disable_auto_failback": false, 00:04:18.866 "generate_uuids": false, 00:04:18.866 "transport_tos": 0, 00:04:18.866 "nvme_error_stat": false, 00:04:18.866 "rdma_srq_size": 0, 00:04:18.866 "io_path_stat": false, 00:04:18.866 "allow_accel_sequence": false, 00:04:18.866 "rdma_max_cq_size": 0, 00:04:18.866 "rdma_cm_event_timeout_ms": 0, 00:04:18.866 "dhchap_digests": [ 00:04:18.866 "sha256", 00:04:18.866 "sha384", 00:04:18.866 "sha512" 00:04:18.866 ], 00:04:18.866 "dhchap_dhgroups": [ 00:04:18.866 "null", 00:04:18.866 "ffdhe2048", 00:04:18.866 "ffdhe3072", 00:04:18.866 "ffdhe4096", 00:04:18.866 "ffdhe6144", 00:04:18.866 "ffdhe8192" 00:04:18.866 ] 00:04:18.866 } 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "method": "bdev_nvme_set_hotplug", 00:04:18.866 "params": { 00:04:18.866 "period_us": 100000, 00:04:18.866 "enable": false 00:04:18.866 } 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "method": "bdev_wait_for_examine" 00:04:18.866 } 00:04:18.866 ] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "scsi", 00:04:18.866 "config": null 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "scheduler", 00:04:18.866 "config": [ 00:04:18.866 { 00:04:18.866 "method": "framework_set_scheduler", 00:04:18.866 "params": { 00:04:18.866 "name": "static" 00:04:18.866 } 00:04:18.866 } 
00:04:18.866 ] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "vhost_scsi", 00:04:18.866 "config": [] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "vhost_blk", 00:04:18.866 "config": [] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "ublk", 00:04:18.866 "config": [] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "nbd", 00:04:18.866 "config": [] 00:04:18.866 }, 00:04:18.866 { 00:04:18.866 "subsystem": "nvmf", 00:04:18.866 "config": [ 00:04:18.866 { 00:04:18.866 "method": "nvmf_set_config", 00:04:18.867 "params": { 00:04:18.867 "discovery_filter": "match_any", 00:04:18.867 "admin_cmd_passthru": { 00:04:18.867 "identify_ctrlr": false 00:04:18.867 } 00:04:18.867 } 00:04:18.867 }, 00:04:18.867 { 00:04:18.867 "method": "nvmf_set_max_subsystems", 00:04:18.867 "params": { 00:04:18.867 "max_subsystems": 1024 00:04:18.867 } 00:04:18.867 }, 00:04:18.867 { 00:04:18.867 "method": "nvmf_set_crdt", 00:04:18.867 "params": { 00:04:18.867 "crdt1": 0, 00:04:18.867 "crdt2": 0, 00:04:18.867 "crdt3": 0 00:04:18.867 } 00:04:18.867 }, 00:04:18.867 { 00:04:18.867 "method": "nvmf_create_transport", 00:04:18.867 "params": { 00:04:18.867 "trtype": "TCP", 00:04:18.867 "max_queue_depth": 128, 00:04:18.867 "max_io_qpairs_per_ctrlr": 127, 00:04:18.867 "in_capsule_data_size": 4096, 00:04:18.867 "max_io_size": 131072, 00:04:18.867 "io_unit_size": 131072, 00:04:18.867 "max_aq_depth": 128, 00:04:18.867 "num_shared_buffers": 511, 00:04:18.867 "buf_cache_size": 4294967295, 00:04:18.867 "dif_insert_or_strip": false, 00:04:18.867 "zcopy": false, 00:04:18.867 "c2h_success": true, 00:04:18.867 "sock_priority": 0, 00:04:18.867 "abort_timeout_sec": 1, 00:04:18.867 "ack_timeout": 0, 00:04:18.867 "data_wr_pool_size": 0 00:04:18.867 } 00:04:18.867 } 00:04:18.867 ] 00:04:18.867 }, 00:04:18.867 { 00:04:18.867 "subsystem": "iscsi", 00:04:18.867 "config": [ 00:04:18.867 { 00:04:18.867 "method": "iscsi_set_options", 00:04:18.867 "params": { 00:04:18.867 "node_base": 
"iqn.2016-06.io.spdk", 00:04:18.867 "max_sessions": 128, 00:04:18.867 "max_connections_per_session": 2, 00:04:18.867 "max_queue_depth": 64, 00:04:18.867 "default_time2wait": 2, 00:04:18.867 "default_time2retain": 20, 00:04:18.867 "first_burst_length": 8192, 00:04:18.867 "immediate_data": true, 00:04:18.867 "allow_duplicated_isid": false, 00:04:18.867 "error_recovery_level": 0, 00:04:18.867 "nop_timeout": 60, 00:04:18.867 "nop_in_interval": 30, 00:04:18.867 "disable_chap": false, 00:04:18.867 "require_chap": false, 00:04:18.867 "mutual_chap": false, 00:04:18.867 "chap_group": 0, 00:04:18.867 "max_large_datain_per_connection": 64, 00:04:18.867 "max_r2t_per_connection": 4, 00:04:18.867 "pdu_pool_size": 36864, 00:04:18.867 "immediate_data_pool_size": 16384, 00:04:18.867 "data_out_pool_size": 2048 00:04:18.867 } 00:04:18.867 } 00:04:18.867 ] 00:04:18.867 } 00:04:18.867 ] 00:04:18.867 } 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 254699 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 254699 ']' 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 254699 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 254699 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 254699' 00:04:18.867 killing 
process with pid 254699 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 254699 00:04:18.867 21:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 254699 00:04:19.127 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=254807 00:04:19.127 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:19.127 21:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 254807 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 254807 ']' 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 254807 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 254807 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 254807' 00:04:24.389 killing process with pid 254807 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 254807 00:04:24.389 21:26:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 254807 00:04:24.647 21:26:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:24.647 21:26:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:24.647 00:04:24.647 real 0m6.217s 00:04:24.648 user 0m5.928s 00:04:24.648 sys 0m0.567s 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.648 ************************************ 00:04:24.648 END TEST skip_rpc_with_json 00:04:24.648 ************************************ 00:04:24.648 21:26:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:24.648 21:26:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:24.648 21:26:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.648 21:26:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.648 21:26:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.648 ************************************ 00:04:24.648 START TEST skip_rpc_with_delay 00:04:24.648 ************************************ 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:24.648 [2024-07-15 21:26:15.334467] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:24.648 [2024-07-15 21:26:15.334609] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:24.648 00:04:24.648 real 0m0.082s 00:04:24.648 user 0m0.052s 00:04:24.648 sys 0m0.029s 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.648 21:26:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:24.648 ************************************ 00:04:24.648 END TEST skip_rpc_with_delay 00:04:24.648 ************************************ 00:04:24.648 21:26:15 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:24.648 21:26:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:24.648 21:26:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:24.648 21:26:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:24.648 21:26:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.648 21:26:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.648 21:26:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.648 ************************************ 00:04:24.648 START TEST exit_on_failed_rpc_init 00:04:24.648 ************************************ 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=255361 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 255361 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 255361 ']' 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.648 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.906 [2024-07-15 21:26:15.471207] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:04:24.906 [2024-07-15 21:26:15.471317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid255361 ] 00:04:24.906 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.906 [2024-07-15 21:26:15.545271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.165 [2024-07-15 21:26:15.701038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.165 21:26:15 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:25.165 21:26:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:25.423 [2024-07-15 21:26:15.975640] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:04:25.423 [2024-07-15 21:26:15.975723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid255376 ] 00:04:25.423 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.423 [2024-07-15 21:26:16.023565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.423 [2024-07-15 21:26:16.126556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.423 [2024-07-15 21:26:16.126646] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:25.423 [2024-07-15 21:26:16.126663] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:25.423 [2024-07-15 21:26:16.126675] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 255361 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 255361 ']' 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 255361 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 255361 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 255361' 
00:04:25.682 killing process with pid 255361 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 255361 00:04:25.682 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 255361 00:04:25.941 00:04:25.941 real 0m1.131s 00:04:25.941 user 0m1.397s 00:04:25.941 sys 0m0.416s 00:04:25.941 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.941 21:26:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.941 ************************************ 00:04:25.941 END TEST exit_on_failed_rpc_init 00:04:25.941 ************************************ 00:04:25.941 21:26:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.941 21:26:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.941 00:04:25.941 real 0m13.027s 00:04:25.941 user 0m12.530s 00:04:25.941 sys 0m1.470s 00:04:25.941 21:26:16 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.941 21:26:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.941 ************************************ 00:04:25.941 END TEST skip_rpc 00:04:25.941 ************************************ 00:04:25.941 21:26:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.941 21:26:16 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:25.941 21:26:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.941 21:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.941 21:26:16 -- common/autotest_common.sh@10 -- # set +x 00:04:25.941 ************************************ 00:04:25.941 START TEST rpc_client 00:04:25.941 ************************************ 00:04:25.941 21:26:16 rpc_client -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:25.941 * Looking for test storage... 00:04:25.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:25.941 21:26:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:25.941 OK 00:04:25.941 21:26:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:25.941 00:04:25.941 real 0m0.070s 00:04:25.941 user 0m0.029s 00:04:25.941 sys 0m0.046s 00:04:25.941 21:26:16 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.941 21:26:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:25.941 ************************************ 00:04:25.941 END TEST rpc_client 00:04:25.941 ************************************ 00:04:25.941 21:26:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.941 21:26:16 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:25.941 21:26:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.941 21:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.941 21:26:16 -- common/autotest_common.sh@10 -- # set +x 00:04:26.200 ************************************ 00:04:26.200 START TEST json_config 00:04:26.200 ************************************ 00:04:26.200 21:26:16 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:26.200 21:26:16 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.200 
21:26:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.200 21:26:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.201 21:26:16 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.201 21:26:16 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.201 21:26:16 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.201 21:26:16 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.201 21:26:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.201 21:26:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.201 21:26:16 json_config -- paths/export.sh@5 -- # export PATH 00:04:26.201 21:26:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@47 -- # : 0 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:26.201 
21:26:16 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:26.201 21:26:16 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:26.201 INFO: JSON configuration test init 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.201 21:26:16 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:26.201 21:26:16 json_config -- json_config/common.sh@9 -- # local app=target 00:04:26.201 21:26:16 json_config -- json_config/common.sh@10 -- # shift 00:04:26.201 21:26:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.201 21:26:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.201 21:26:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.201 21:26:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.201 21:26:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:04:26.201 21:26:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=255584 00:04:26.201 21:26:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:26.201 21:26:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.201 Waiting for target to run... 00:04:26.201 21:26:16 json_config -- json_config/common.sh@25 -- # waitforlisten 255584 /var/tmp/spdk_tgt.sock 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@829 -- # '[' -z 255584 ']' 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.201 21:26:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.201 [2024-07-15 21:26:16.864133] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:04:26.201 [2024-07-15 21:26:16.864253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid255584 ] 00:04:26.201 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.459 [2024-07-15 21:26:17.226225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.718 [2024-07-15 21:26:17.322056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.309 21:26:17 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.309 21:26:17 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:27.309 21:26:17 json_config -- json_config/common.sh@26 -- # echo '' 00:04:27.309 00:04:27.309 21:26:17 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:27.309 21:26:17 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:27.309 21:26:17 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.309 21:26:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.309 21:26:17 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:27.309 21:26:17 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:27.309 21:26:17 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.309 21:26:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.309 21:26:17 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:27.309 21:26:17 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:27.309 21:26:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:30.589 
21:26:21 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:30.589 21:26:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.589 21:26:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:30.589 21:26:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:30.589 21:26:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.589 21:26:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@286 
-- # [[ 0 -eq 1 ]] 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:30.589 21:26:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.589 21:26:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:30.589 21:26:21 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.589 21:26:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.846 MallocForNvmf0 00:04:30.846 21:26:21 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.846 21:26:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:31.103 MallocForNvmf1 00:04:31.103 21:26:21 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:31.103 21:26:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:31.361 [2024-07-15 21:26:22.122109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.361 21:26:22 json_config -- 
json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:31.361 21:26:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:31.618 21:26:22 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.618 21:26:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.875 21:26:22 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.875 21:26:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:32.132 21:26:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:32.132 21:26:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:32.389 [2024-07-15 21:26:23.097069] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:32.389 21:26:23 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:32.389 21:26:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.389 21:26:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.389 21:26:23 json_config -- json_config/json_config.sh@293 -- # timing_exit 
json_config_setup_target 00:04:32.389 21:26:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.389 21:26:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.389 21:26:23 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:32.389 21:26:23 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:32.389 21:26:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:32.655 MallocBdevForConfigChangeCheck 00:04:32.655 21:26:23 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:32.655 21:26:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.655 21:26:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.655 21:26:23 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:32.655 21:26:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.219 21:26:23 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:33.219 INFO: shutting down applications... 
00:04:33.219 21:26:23 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:33.219 21:26:23 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:33.219 21:26:23 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:33.219 21:26:23 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:35.117 Calling clear_iscsi_subsystem 00:04:35.117 Calling clear_nvmf_subsystem 00:04:35.117 Calling clear_nbd_subsystem 00:04:35.117 Calling clear_ublk_subsystem 00:04:35.117 Calling clear_vhost_blk_subsystem 00:04:35.117 Calling clear_vhost_scsi_subsystem 00:04:35.117 Calling clear_bdev_subsystem 00:04:35.117 21:26:25 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:35.117 21:26:25 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:35.117 21:26:25 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:35.117 21:26:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.117 21:26:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:35.118 21:26:25 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:35.118 21:26:25 json_config -- json_config/json_config.sh@345 -- # break 00:04:35.118 21:26:25 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:35.118 21:26:25 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:35.118 21:26:25 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:35.118 21:26:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.118 21:26:25 json_config -- json_config/common.sh@35 -- # [[ -n 255584 ]] 00:04:35.118 21:26:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 255584 00:04:35.118 21:26:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.118 21:26:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.118 21:26:25 json_config -- json_config/common.sh@41 -- # kill -0 255584 00:04:35.118 21:26:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:35.686 21:26:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:35.686 21:26:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.686 21:26:26 json_config -- json_config/common.sh@41 -- # kill -0 255584 00:04:35.686 21:26:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:35.686 21:26:26 json_config -- json_config/common.sh@43 -- # break 00:04:35.686 21:26:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:35.686 21:26:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:35.686 SPDK target shutdown done 00:04:35.686 21:26:26 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:35.686 INFO: relaunching applications... 
00:04:35.686 21:26:26 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.686 21:26:26 json_config -- json_config/common.sh@9 -- # local app=target 00:04:35.686 21:26:26 json_config -- json_config/common.sh@10 -- # shift 00:04:35.686 21:26:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:35.686 21:26:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:35.686 21:26:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:35.686 21:26:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.686 21:26:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.686 21:26:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=256588 00:04:35.686 21:26:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:35.686 Waiting for target to run... 00:04:35.686 21:26:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.686 21:26:26 json_config -- json_config/common.sh@25 -- # waitforlisten 256588 /var/tmp/spdk_tgt.sock 00:04:35.686 21:26:26 json_config -- common/autotest_common.sh@829 -- # '[' -z 256588 ']' 00:04:35.686 21:26:26 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.686 21:26:26 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.686 21:26:26 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:35.686 21:26:26 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.686 21:26:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.686 [2024-07-15 21:26:26.416416] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:04:35.686 [2024-07-15 21:26:26.416510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256588 ] 00:04:35.686 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.944 [2024-07-15 21:26:26.718653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.202 [2024-07-15 21:26:26.812799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.480 [2024-07-15 21:26:29.835581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:39.480 [2024-07-15 21:26:29.867900] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:39.480 21:26:29 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.480 21:26:29 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:39.480 21:26:29 json_config -- json_config/common.sh@26 -- # echo '' 00:04:39.480 00:04:39.480 21:26:29 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:39.480 21:26:29 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:39.480 INFO: Checking if target configuration is the same... 
00:04:39.480 21:26:29 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.480 21:26:29 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:39.480 21:26:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.480 + '[' 2 -ne 2 ']' 00:04:39.480 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:39.480 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:39.480 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.480 +++ basename /dev/fd/62 00:04:39.480 ++ mktemp /tmp/62.XXX 00:04:39.480 + tmp_file_1=/tmp/62.sKu 00:04:39.480 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.480 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:39.480 + tmp_file_2=/tmp/spdk_tgt_config.json.VqQ 00:04:39.480 + ret=0 00:04:39.480 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.480 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.738 + diff -u /tmp/62.sKu /tmp/spdk_tgt_config.json.VqQ 00:04:39.738 + echo 'INFO: JSON config files are the same' 00:04:39.738 INFO: JSON config files are the same 00:04:39.738 + rm /tmp/62.sKu /tmp/spdk_tgt_config.json.VqQ 00:04:39.738 + exit 0 00:04:39.738 21:26:30 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:39.738 21:26:30 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:39.738 INFO: changing configuration and checking if this can be detected... 
00:04:39.738 21:26:30 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.738 21:26:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.995 21:26:30 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.995 21:26:30 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:39.995 21:26:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.995 + '[' 2 -ne 2 ']' 00:04:39.995 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:39.995 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:39.995 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.995 +++ basename /dev/fd/62 00:04:39.995 ++ mktemp /tmp/62.XXX 00:04:39.995 + tmp_file_1=/tmp/62.GX1 00:04:39.995 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.995 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:39.995 + tmp_file_2=/tmp/spdk_tgt_config.json.Ru9 00:04:39.995 + ret=0 00:04:39.995 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:40.252 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:40.509 + diff -u /tmp/62.GX1 /tmp/spdk_tgt_config.json.Ru9 00:04:40.509 + ret=1 00:04:40.509 + echo '=== Start of file: /tmp/62.GX1 ===' 00:04:40.509 + cat /tmp/62.GX1 00:04:40.509 + echo '=== End of file: /tmp/62.GX1 ===' 00:04:40.509 + echo '' 00:04:40.509 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Ru9 ===' 00:04:40.509 + cat /tmp/spdk_tgt_config.json.Ru9 00:04:40.509 + echo '=== End of file: /tmp/spdk_tgt_config.json.Ru9 ===' 00:04:40.509 + echo '' 00:04:40.509 + rm /tmp/62.GX1 /tmp/spdk_tgt_config.json.Ru9 00:04:40.509 + exit 1 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:40.509 INFO: configuration change detected. 
00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@317 -- # [[ -n 256588 ]] 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.509 21:26:31 json_config -- json_config/json_config.sh@323 -- # killprocess 256588 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@948 -- # '[' -z 256588 ']' 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@952 -- # kill -0 256588 
00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@953 -- # uname 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 256588 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 256588' 00:04:40.509 killing process with pid 256588 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@967 -- # kill 256588 00:04:40.509 21:26:31 json_config -- common/autotest_common.sh@972 -- # wait 256588 00:04:41.880 21:26:32 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.880 21:26:32 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:41.880 21:26:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.880 21:26:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.139 21:26:32 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:42.140 21:26:32 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:42.140 INFO: Success 00:04:42.140 00:04:42.140 real 0m15.939s 00:04:42.140 user 0m18.219s 00:04:42.140 sys 0m1.839s 00:04:42.140 21:26:32 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.140 21:26:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.140 ************************************ 00:04:42.140 END TEST json_config 00:04:42.140 ************************************ 00:04:42.140 21:26:32 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.140 21:26:32 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.140 21:26:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.140 21:26:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.140 21:26:32 -- common/autotest_common.sh@10 -- # set +x 00:04:42.140 ************************************ 00:04:42.140 START TEST json_config_extra_key 00:04:42.140 ************************************ 00:04:42.140 21:26:32 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.140 21:26:32 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.140 21:26:32 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.140 21:26:32 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.140 21:26:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.140 21:26:32 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.140 21:26:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.140 21:26:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:42.140 21:26:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:42.140 21:26:32 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:42.140 INFO: launching applications... 
00:04:42.140 21:26:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=257236 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.140 Waiting for target to run... 
00:04:42.140 21:26:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 257236 /var/tmp/spdk_tgt.sock 00:04:42.140 21:26:32 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 257236 ']' 00:04:42.140 21:26:32 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.140 21:26:32 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.140 21:26:32 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.140 21:26:32 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.140 21:26:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.140 [2024-07-15 21:26:32.857678] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:04:42.140 [2024-07-15 21:26:32.857782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257236 ] 00:04:42.140 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.708 [2024-07-15 21:26:33.227293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.708 [2024-07-15 21:26:33.305333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.272 21:26:33 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.272 21:26:33 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:43.272 00:04:43.272 21:26:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:43.272 INFO: shutting down applications... 
00:04:43.272 21:26:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 257236 ]] 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 257236 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 257236 00:04:43.272 21:26:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.840 21:26:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.840 21:26:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.840 21:26:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 257236 00:04:43.840 21:26:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.840 21:26:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.840 21:26:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.840 21:26:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.840 SPDK target shutdown done 00:04:43.840 21:26:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.840 Success 00:04:43.840 00:04:43.840 real 0m1.594s 00:04:43.840 user 0m1.411s 00:04:43.840 sys 0m0.470s 00:04:43.840 21:26:34 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.840 21:26:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.840 ************************************ 
00:04:43.840 END TEST json_config_extra_key 00:04:43.840 ************************************ 00:04:43.840 21:26:34 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.840 21:26:34 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.840 21:26:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.840 21:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.840 21:26:34 -- common/autotest_common.sh@10 -- # set +x 00:04:43.840 ************************************ 00:04:43.840 START TEST alias_rpc 00:04:43.840 ************************************ 00:04:43.840 21:26:34 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.840 * Looking for test storage... 00:04:43.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:43.840 21:26:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.840 21:26:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=257480 00:04:43.840 21:26:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 257480 00:04:43.840 21:26:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.840 21:26:34 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 257480 ']' 00:04:43.840 21:26:34 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.840 21:26:34 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.840 21:26:34 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.840 21:26:34 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.840 21:26:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.840 [2024-07-15 21:26:34.500504] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:04:43.840 [2024-07-15 21:26:34.500615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257480 ] 00:04:43.840 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.840 [2024-07-15 21:26:34.563083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.099 [2024-07-15 21:26:34.679581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.357 21:26:34 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.357 21:26:34 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.357 21:26:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:44.615 21:26:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 257480 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 257480 ']' 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 257480 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 257480 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 257480' 00:04:44.615 
killing process with pid 257480 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@967 -- # kill 257480 00:04:44.615 21:26:35 alias_rpc -- common/autotest_common.sh@972 -- # wait 257480 00:04:44.873 00:04:44.873 real 0m1.138s 00:04:44.873 user 0m1.340s 00:04:44.873 sys 0m0.382s 00:04:44.873 21:26:35 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.873 21:26:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.873 ************************************ 00:04:44.873 END TEST alias_rpc 00:04:44.873 ************************************ 00:04:44.873 21:26:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.873 21:26:35 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:44.873 21:26:35 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.873 21:26:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.873 21:26:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.873 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:04:44.873 ************************************ 00:04:44.873 START TEST spdkcli_tcp 00:04:44.873 ************************************ 00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.873 * Looking for test storage... 
00:04:44.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=257639 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.873 21:26:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 257639 00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 257639 ']' 00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.873 21:26:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.131 [2024-07-15 21:26:35.704483] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:04:45.131 [2024-07-15 21:26:35.704590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257639 ] 00:04:45.131 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.131 [2024-07-15 21:26:35.770558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.131 [2024-07-15 21:26:35.902319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.131 [2024-07-15 21:26:35.902326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.389 21:26:36 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.389 21:26:36 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:45.389 21:26:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=257643 00:04:45.389 21:26:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:45.389 21:26:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:45.647 [ 00:04:45.647 "bdev_malloc_delete", 00:04:45.647 "bdev_malloc_create", 00:04:45.647 "bdev_null_resize", 00:04:45.647 "bdev_null_delete", 00:04:45.647 "bdev_null_create", 00:04:45.647 "bdev_nvme_cuse_unregister", 00:04:45.647 "bdev_nvme_cuse_register", 00:04:45.647 "bdev_opal_new_user", 00:04:45.647 "bdev_opal_set_lock_state", 00:04:45.647 "bdev_opal_delete", 00:04:45.647 "bdev_opal_get_info", 00:04:45.647 "bdev_opal_create", 00:04:45.647 "bdev_nvme_opal_revert", 00:04:45.647 
"bdev_nvme_opal_init", 00:04:45.647 "bdev_nvme_send_cmd", 00:04:45.647 "bdev_nvme_get_path_iostat", 00:04:45.647 "bdev_nvme_get_mdns_discovery_info", 00:04:45.647 "bdev_nvme_stop_mdns_discovery", 00:04:45.647 "bdev_nvme_start_mdns_discovery", 00:04:45.647 "bdev_nvme_set_multipath_policy", 00:04:45.647 "bdev_nvme_set_preferred_path", 00:04:45.647 "bdev_nvme_get_io_paths", 00:04:45.647 "bdev_nvme_remove_error_injection", 00:04:45.647 "bdev_nvme_add_error_injection", 00:04:45.647 "bdev_nvme_get_discovery_info", 00:04:45.647 "bdev_nvme_stop_discovery", 00:04:45.647 "bdev_nvme_start_discovery", 00:04:45.647 "bdev_nvme_get_controller_health_info", 00:04:45.647 "bdev_nvme_disable_controller", 00:04:45.647 "bdev_nvme_enable_controller", 00:04:45.647 "bdev_nvme_reset_controller", 00:04:45.647 "bdev_nvme_get_transport_statistics", 00:04:45.647 "bdev_nvme_apply_firmware", 00:04:45.647 "bdev_nvme_detach_controller", 00:04:45.647 "bdev_nvme_get_controllers", 00:04:45.647 "bdev_nvme_attach_controller", 00:04:45.647 "bdev_nvme_set_hotplug", 00:04:45.647 "bdev_nvme_set_options", 00:04:45.647 "bdev_passthru_delete", 00:04:45.647 "bdev_passthru_create", 00:04:45.647 "bdev_lvol_set_parent_bdev", 00:04:45.647 "bdev_lvol_set_parent", 00:04:45.647 "bdev_lvol_check_shallow_copy", 00:04:45.647 "bdev_lvol_start_shallow_copy", 00:04:45.647 "bdev_lvol_grow_lvstore", 00:04:45.647 "bdev_lvol_get_lvols", 00:04:45.647 "bdev_lvol_get_lvstores", 00:04:45.647 "bdev_lvol_delete", 00:04:45.647 "bdev_lvol_set_read_only", 00:04:45.647 "bdev_lvol_resize", 00:04:45.647 "bdev_lvol_decouple_parent", 00:04:45.647 "bdev_lvol_inflate", 00:04:45.647 "bdev_lvol_rename", 00:04:45.647 "bdev_lvol_clone_bdev", 00:04:45.647 "bdev_lvol_clone", 00:04:45.647 "bdev_lvol_snapshot", 00:04:45.647 "bdev_lvol_create", 00:04:45.647 "bdev_lvol_delete_lvstore", 00:04:45.647 "bdev_lvol_rename_lvstore", 00:04:45.647 "bdev_lvol_create_lvstore", 00:04:45.647 "bdev_raid_set_options", 00:04:45.647 "bdev_raid_remove_base_bdev", 
00:04:45.647 "bdev_raid_add_base_bdev", 00:04:45.647 "bdev_raid_delete", 00:04:45.647 "bdev_raid_create", 00:04:45.647 "bdev_raid_get_bdevs", 00:04:45.647 "bdev_error_inject_error", 00:04:45.647 "bdev_error_delete", 00:04:45.647 "bdev_error_create", 00:04:45.647 "bdev_split_delete", 00:04:45.647 "bdev_split_create", 00:04:45.647 "bdev_delay_delete", 00:04:45.647 "bdev_delay_create", 00:04:45.647 "bdev_delay_update_latency", 00:04:45.647 "bdev_zone_block_delete", 00:04:45.647 "bdev_zone_block_create", 00:04:45.647 "blobfs_create", 00:04:45.647 "blobfs_detect", 00:04:45.647 "blobfs_set_cache_size", 00:04:45.647 "bdev_aio_delete", 00:04:45.647 "bdev_aio_rescan", 00:04:45.647 "bdev_aio_create", 00:04:45.647 "bdev_ftl_set_property", 00:04:45.647 "bdev_ftl_get_properties", 00:04:45.647 "bdev_ftl_get_stats", 00:04:45.647 "bdev_ftl_unmap", 00:04:45.647 "bdev_ftl_unload", 00:04:45.647 "bdev_ftl_delete", 00:04:45.647 "bdev_ftl_load", 00:04:45.647 "bdev_ftl_create", 00:04:45.647 "bdev_virtio_attach_controller", 00:04:45.647 "bdev_virtio_scsi_get_devices", 00:04:45.647 "bdev_virtio_detach_controller", 00:04:45.647 "bdev_virtio_blk_set_hotplug", 00:04:45.647 "bdev_iscsi_delete", 00:04:45.647 "bdev_iscsi_create", 00:04:45.647 "bdev_iscsi_set_options", 00:04:45.647 "accel_error_inject_error", 00:04:45.647 "ioat_scan_accel_module", 00:04:45.647 "dsa_scan_accel_module", 00:04:45.647 "iaa_scan_accel_module", 00:04:45.647 "vfu_virtio_create_scsi_endpoint", 00:04:45.647 "vfu_virtio_scsi_remove_target", 00:04:45.647 "vfu_virtio_scsi_add_target", 00:04:45.647 "vfu_virtio_create_blk_endpoint", 00:04:45.647 "vfu_virtio_delete_endpoint", 00:04:45.647 "keyring_file_remove_key", 00:04:45.647 "keyring_file_add_key", 00:04:45.647 "keyring_linux_set_options", 00:04:45.647 "iscsi_get_histogram", 00:04:45.647 "iscsi_enable_histogram", 00:04:45.647 "iscsi_set_options", 00:04:45.647 "iscsi_get_auth_groups", 00:04:45.647 "iscsi_auth_group_remove_secret", 00:04:45.647 "iscsi_auth_group_add_secret", 
00:04:45.647 "iscsi_delete_auth_group", 00:04:45.647 "iscsi_create_auth_group", 00:04:45.647 "iscsi_set_discovery_auth", 00:04:45.647 "iscsi_get_options", 00:04:45.647 "iscsi_target_node_request_logout", 00:04:45.647 "iscsi_target_node_set_redirect", 00:04:45.647 "iscsi_target_node_set_auth", 00:04:45.647 "iscsi_target_node_add_lun", 00:04:45.647 "iscsi_get_stats", 00:04:45.647 "iscsi_get_connections", 00:04:45.647 "iscsi_portal_group_set_auth", 00:04:45.647 "iscsi_start_portal_group", 00:04:45.647 "iscsi_delete_portal_group", 00:04:45.647 "iscsi_create_portal_group", 00:04:45.647 "iscsi_get_portal_groups", 00:04:45.647 "iscsi_delete_target_node", 00:04:45.647 "iscsi_target_node_remove_pg_ig_maps", 00:04:45.647 "iscsi_target_node_add_pg_ig_maps", 00:04:45.648 "iscsi_create_target_node", 00:04:45.648 "iscsi_get_target_nodes", 00:04:45.648 "iscsi_delete_initiator_group", 00:04:45.648 "iscsi_initiator_group_remove_initiators", 00:04:45.648 "iscsi_initiator_group_add_initiators", 00:04:45.648 "iscsi_create_initiator_group", 00:04:45.648 "iscsi_get_initiator_groups", 00:04:45.648 "nvmf_set_crdt", 00:04:45.648 "nvmf_set_config", 00:04:45.648 "nvmf_set_max_subsystems", 00:04:45.648 "nvmf_stop_mdns_prr", 00:04:45.648 "nvmf_publish_mdns_prr", 00:04:45.648 "nvmf_subsystem_get_listeners", 00:04:45.648 "nvmf_subsystem_get_qpairs", 00:04:45.648 "nvmf_subsystem_get_controllers", 00:04:45.648 "nvmf_get_stats", 00:04:45.648 "nvmf_get_transports", 00:04:45.648 "nvmf_create_transport", 00:04:45.648 "nvmf_get_targets", 00:04:45.648 "nvmf_delete_target", 00:04:45.648 "nvmf_create_target", 00:04:45.648 "nvmf_subsystem_allow_any_host", 00:04:45.648 "nvmf_subsystem_remove_host", 00:04:45.648 "nvmf_subsystem_add_host", 00:04:45.648 "nvmf_ns_remove_host", 00:04:45.648 "nvmf_ns_add_host", 00:04:45.648 "nvmf_subsystem_remove_ns", 00:04:45.648 "nvmf_subsystem_add_ns", 00:04:45.648 "nvmf_subsystem_listener_set_ana_state", 00:04:45.648 "nvmf_discovery_get_referrals", 00:04:45.648 
"nvmf_discovery_remove_referral", 00:04:45.648 "nvmf_discovery_add_referral", 00:04:45.648 "nvmf_subsystem_remove_listener", 00:04:45.648 "nvmf_subsystem_add_listener", 00:04:45.648 "nvmf_delete_subsystem", 00:04:45.648 "nvmf_create_subsystem", 00:04:45.648 "nvmf_get_subsystems", 00:04:45.648 "env_dpdk_get_mem_stats", 00:04:45.648 "nbd_get_disks", 00:04:45.648 "nbd_stop_disk", 00:04:45.648 "nbd_start_disk", 00:04:45.648 "ublk_recover_disk", 00:04:45.648 "ublk_get_disks", 00:04:45.648 "ublk_stop_disk", 00:04:45.648 "ublk_start_disk", 00:04:45.648 "ublk_destroy_target", 00:04:45.648 "ublk_create_target", 00:04:45.648 "virtio_blk_create_transport", 00:04:45.648 "virtio_blk_get_transports", 00:04:45.648 "vhost_controller_set_coalescing", 00:04:45.648 "vhost_get_controllers", 00:04:45.648 "vhost_delete_controller", 00:04:45.648 "vhost_create_blk_controller", 00:04:45.648 "vhost_scsi_controller_remove_target", 00:04:45.648 "vhost_scsi_controller_add_target", 00:04:45.648 "vhost_start_scsi_controller", 00:04:45.648 "vhost_create_scsi_controller", 00:04:45.648 "thread_set_cpumask", 00:04:45.648 "framework_get_governor", 00:04:45.648 "framework_get_scheduler", 00:04:45.648 "framework_set_scheduler", 00:04:45.648 "framework_get_reactors", 00:04:45.648 "thread_get_io_channels", 00:04:45.648 "thread_get_pollers", 00:04:45.648 "thread_get_stats", 00:04:45.648 "framework_monitor_context_switch", 00:04:45.648 "spdk_kill_instance", 00:04:45.648 "log_enable_timestamps", 00:04:45.648 "log_get_flags", 00:04:45.648 "log_clear_flag", 00:04:45.648 "log_set_flag", 00:04:45.648 "log_get_level", 00:04:45.648 "log_set_level", 00:04:45.648 "log_get_print_level", 00:04:45.648 "log_set_print_level", 00:04:45.648 "framework_enable_cpumask_locks", 00:04:45.648 "framework_disable_cpumask_locks", 00:04:45.648 "framework_wait_init", 00:04:45.648 "framework_start_init", 00:04:45.648 "scsi_get_devices", 00:04:45.648 "bdev_get_histogram", 00:04:45.648 "bdev_enable_histogram", 00:04:45.648 
"bdev_set_qos_limit", 00:04:45.648 "bdev_set_qd_sampling_period", 00:04:45.648 "bdev_get_bdevs", 00:04:45.648 "bdev_reset_iostat", 00:04:45.648 "bdev_get_iostat", 00:04:45.648 "bdev_examine", 00:04:45.648 "bdev_wait_for_examine", 00:04:45.648 "bdev_set_options", 00:04:45.648 "notify_get_notifications", 00:04:45.648 "notify_get_types", 00:04:45.648 "accel_get_stats", 00:04:45.648 "accel_set_options", 00:04:45.648 "accel_set_driver", 00:04:45.648 "accel_crypto_key_destroy", 00:04:45.648 "accel_crypto_keys_get", 00:04:45.648 "accel_crypto_key_create", 00:04:45.648 "accel_assign_opc", 00:04:45.648 "accel_get_module_info", 00:04:45.648 "accel_get_opc_assignments", 00:04:45.648 "vmd_rescan", 00:04:45.648 "vmd_remove_device", 00:04:45.648 "vmd_enable", 00:04:45.648 "sock_get_default_impl", 00:04:45.648 "sock_set_default_impl", 00:04:45.648 "sock_impl_set_options", 00:04:45.648 "sock_impl_get_options", 00:04:45.648 "iobuf_get_stats", 00:04:45.648 "iobuf_set_options", 00:04:45.648 "keyring_get_keys", 00:04:45.648 "framework_get_pci_devices", 00:04:45.648 "framework_get_config", 00:04:45.648 "framework_get_subsystems", 00:04:45.648 "vfu_tgt_set_base_path", 00:04:45.648 "trace_get_info", 00:04:45.648 "trace_get_tpoint_group_mask", 00:04:45.648 "trace_disable_tpoint_group", 00:04:45.648 "trace_enable_tpoint_group", 00:04:45.648 "trace_clear_tpoint_mask", 00:04:45.648 "trace_set_tpoint_mask", 00:04:45.648 "spdk_get_version", 00:04:45.648 "rpc_get_methods" 00:04:45.648 ] 00:04:45.648 21:26:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:45.648 21:26:36 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.648 21:26:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.648 21:26:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:45.648 21:26:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 257639 00:04:45.648 21:26:36 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 257639 ']' 
00:04:45.648 21:26:36 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 257639 00:04:45.648 21:26:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:45.648 21:26:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.648 21:26:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 257639 00:04:45.905 21:26:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.906 21:26:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.906 21:26:36 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 257639' 00:04:45.906 killing process with pid 257639 00:04:45.906 21:26:36 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 257639 00:04:45.906 21:26:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 257639 00:04:46.162 00:04:46.162 real 0m1.164s 00:04:46.162 user 0m2.109s 00:04:46.162 sys 0m0.424s 00:04:46.162 21:26:36 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.162 21:26:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:46.162 ************************************ 00:04:46.162 END TEST spdkcli_tcp 00:04:46.162 ************************************ 00:04:46.162 21:26:36 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.162 21:26:36 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.162 21:26:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.162 21:26:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.162 21:26:36 -- common/autotest_common.sh@10 -- # set +x 00:04:46.162 ************************************ 00:04:46.162 START TEST dpdk_mem_utility 00:04:46.162 ************************************ 00:04:46.162 21:26:36 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.162 * Looking for test storage... 00:04:46.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:46.162 21:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.162 21:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=257807 00:04:46.163 21:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 257807 00:04:46.163 21:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.163 21:26:36 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 257807 ']' 00:04:46.163 21:26:36 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.163 21:26:36 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.163 21:26:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.163 21:26:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.163 21:26:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.163 [2024-07-15 21:26:36.909762] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:04:46.163 [2024-07-15 21:26:36.909838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257807 ] 00:04:46.163 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.419 [2024-07-15 21:26:36.958393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.419 [2024-07-15 21:26:37.056600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.675 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.675 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:46.675 21:26:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.675 21:26:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.675 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.675 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.675 { 00:04:46.675 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.675 } 00:04:46.675 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.675 21:26:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.675 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:46.675 1 heaps totaling size 814.000000 MiB 00:04:46.675 size: 814.000000 MiB heap id: 0 00:04:46.675 end heaps---------- 00:04:46.675 8 mempools totaling size 598.116089 MiB 00:04:46.675 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:46.675 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:46.675 size: 84.521057 MiB name: bdev_io_257807 00:04:46.675 size: 51.011292 MiB name: evtpool_257807 
00:04:46.675 size: 50.003479 MiB name: msgpool_257807 00:04:46.675 size: 21.763794 MiB name: PDU_Pool 00:04:46.676 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:46.676 size: 0.026123 MiB name: Session_Pool 00:04:46.676 end mempools------- 00:04:46.676 6 memzones totaling size 4.142822 MiB 00:04:46.676 size: 1.000366 MiB name: RG_ring_0_257807 00:04:46.676 size: 1.000366 MiB name: RG_ring_1_257807 00:04:46.676 size: 1.000366 MiB name: RG_ring_4_257807 00:04:46.676 size: 1.000366 MiB name: RG_ring_5_257807 00:04:46.676 size: 0.125366 MiB name: RG_ring_2_257807 00:04:46.676 size: 0.015991 MiB name: RG_ring_3_257807 00:04:46.676 end memzones------- 00:04:46.676 21:26:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:46.676 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:46.676 list of free elements. size: 12.519348 MiB 00:04:46.676 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:46.676 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:46.676 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:46.676 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:46.676 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:46.676 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:46.676 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:46.676 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:46.676 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:46.676 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:46.676 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:46.676 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:46.676 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:46.676 element at address: 0x200027e00000 with size: 0.410034 MiB 
00:04:46.676 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:46.676 list of standard malloc elements. size: 199.218079 MiB 00:04:46.676 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:46.676 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:46.676 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:46.676 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:46.676 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:46.676 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:46.676 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:46.676 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:46.676 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:46.676 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:46.676 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:46.676 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:04:46.676 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:46.676 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:46.676 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:46.676 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:46.676 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:46.676 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:46.676 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:46.676 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:46.676 list of memzone associated elements. 
size: 602.262573 MiB
00:04:46.676 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:04:46.676 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:46.676 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:04:46.676 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:46.676 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:04:46.676 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_257807_0
00:04:46.676 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:04:46.676 associated memzone info: size: 48.002930 MiB name: MP_evtpool_257807_0
00:04:46.676 element at address: 0x200003fff380 with size: 48.003052 MiB
00:04:46.676 associated memzone info: size: 48.002930 MiB name: MP_msgpool_257807_0
00:04:46.676 element at address: 0x2000195be940 with size: 20.255554 MiB
00:04:46.676 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:46.676 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:04:46.676 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:46.676 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:04:46.676 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_257807
00:04:46.676 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:04:46.676 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_257807
00:04:46.676 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:46.676 associated memzone info: size: 1.007996 MiB name: MP_evtpool_257807
00:04:46.676 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:04:46.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:46.676 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:04:46.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:46.676 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:04:46.676 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:46.676 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:04:46.676 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:46.676 element at address: 0x200003eff180 with size: 1.000488 MiB
00:04:46.676 associated memzone info: size: 1.000366 MiB name: RG_ring_0_257807
00:04:46.676 element at address: 0x200003affc00 with size: 1.000488 MiB
00:04:46.676 associated memzone info: size: 1.000366 MiB name: RG_ring_1_257807
00:04:46.676 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:04:46.676 associated memzone info: size: 1.000366 MiB name: RG_ring_4_257807
00:04:46.676 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:04:46.676 associated memzone info: size: 1.000366 MiB name: RG_ring_5_257807
00:04:46.676 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:04:46.676 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_257807
00:04:46.676 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:04:46.676 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:46.676 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:04:46.676 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:46.676 element at address: 0x20001947c540 with size: 0.250488 MiB
00:04:46.676 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:46.676 element at address: 0x200003adf880 with size: 0.125488 MiB
00:04:46.676 associated memzone info: size: 0.125366 MiB name: RG_ring_2_257807
00:04:46.676 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:04:46.676 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:46.676 element at address: 0x200027e69100 with size: 0.023743 MiB
00:04:46.676 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:46.676 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:04:46.676 associated memzone info: size: 0.015991 MiB name: RG_ring_3_257807
00:04:46.676 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:04:46.676 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:46.676 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:04:46.676 associated memzone info: size: 0.000183 MiB name: MP_msgpool_257807
00:04:46.676 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:04:46.676 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_257807
00:04:46.676 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:04:46.676 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:46.676 21:26:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:46.676 21:26:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 257807
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 257807 ']'
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 257807
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 257807
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 257807'
00:04:46.676 killing process with pid 257807
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 257807
00:04:46.676 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 257807
00:04:46.934
00:04:46.934 real 0m0.894s
00:04:46.934 user 0m0.942s
00:04:46.934 sys 0m0.334s
00:04:46.934 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:46.934 21:26:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:46.934 ************************************
00:04:46.934 END TEST dpdk_mem_utility
00:04:46.934 ************************************
00:04:46.934 21:26:37 -- common/autotest_common.sh@1142 -- # return 0
00:04:46.934 21:26:37 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:46.934 21:26:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:46.934 21:26:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:46.934 21:26:37 -- common/autotest_common.sh@10 -- # set +x
00:04:47.191 ************************************
00:04:47.191 START TEST event
00:04:47.191 ************************************
00:04:47.191 21:26:37 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:04:47.191 * Looking for test storage...
00:04:47.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:47.191 21:26:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:04:47.191 21:26:37 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:47.191 21:26:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:47.191 21:26:37 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:04:47.191 21:26:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:47.191 21:26:37 event -- common/autotest_common.sh@10 -- # set +x
00:04:47.191 ************************************
00:04:47.191 START TEST event_perf
00:04:47.191 ************************************
00:04:47.191 21:26:37 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:47.191 Running I/O for 1 seconds...[2024-07-15 21:26:37.843097] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:04:47.191 [2024-07-15 21:26:37.843196] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257966 ]
00:04:47.191 EAL: No free 2048 kB hugepages reported on node 1
00:04:47.191 [2024-07-15 21:26:37.900657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:47.447 [2024-07-15 21:26:38.007482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:47.448 [2024-07-15 21:26:38.007507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:47.448 [2024-07-15 21:26:38.007558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:47.448 [2024-07-15 21:26:38.007561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:48.391 Running I/O for 1 seconds...
00:04:48.391 lcore 0: 268245
00:04:48.391 lcore 1: 268245
00:04:48.391 lcore 2: 268243
00:04:48.391 lcore 3: 268243
00:04:48.391 done.
00:04:48.391
00:04:48.391 real 0m1.275s
00:04:48.391 user 0m4.190s
00:04:48.391 sys 0m0.076s
00:04:48.391 21:26:39 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:48.391 21:26:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:04:48.391 ************************************
00:04:48.391 END TEST event_perf
00:04:48.391 ************************************
00:04:48.391 21:26:39 event -- common/autotest_common.sh@1142 -- # return 0
00:04:48.391 21:26:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:48.391 21:26:39 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:04:48.391 21:26:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:48.391 21:26:39 event -- common/autotest_common.sh@10 -- # set +x
00:04:48.391 ************************************
00:04:48.391 START TEST event_reactor
00:04:48.391 ************************************
00:04:48.391 21:26:39 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:04:48.391 [2024-07-15 21:26:39.173011] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:04:48.391 [2024-07-15 21:26:39.173080] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258090 ]
00:04:48.649 EAL: No free 2048 kB hugepages reported on node 1
00:04:48.649 [2024-07-15 21:26:39.227922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:48.649 [2024-07-15 21:26:39.328354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.020 test_start
00:04:50.020 oneshot
00:04:50.020 tick 100
00:04:50.020 tick 100
00:04:50.020 tick 250
00:04:50.020 tick 100
00:04:50.020 tick 100
00:04:50.020 tick 100
00:04:50.020 tick 250
00:04:50.020 tick 500
00:04:50.020 tick 100
00:04:50.020 tick 100
00:04:50.020 tick 250
00:04:50.020 tick 100
00:04:50.020 tick 100
00:04:50.020 test_end
00:04:50.020
00:04:50.020 real 0m1.266s
00:04:50.020 user 0m1.191s
00:04:50.020 sys 0m0.070s
00:04:50.020 21:26:40 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:50.020 21:26:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:04:50.020 ************************************
00:04:50.020 END TEST event_reactor
00:04:50.020 ************************************
00:04:50.020 21:26:40 event -- common/autotest_common.sh@1142 -- # return 0
00:04:50.020 21:26:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:50.020 21:26:40 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:04:50.020 21:26:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:50.020 21:26:40 event -- common/autotest_common.sh@10 -- # set +x
00:04:50.020 ************************************
00:04:50.020 START TEST event_reactor_perf
00:04:50.020 ************************************
00:04:50.020 21:26:40 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:04:50.020 [2024-07-15 21:26:40.498115] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:04:50.020 [2024-07-15 21:26:40.498211] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258212 ]
00:04:50.020 EAL: No free 2048 kB hugepages reported on node 1
00:04:50.020 [2024-07-15 21:26:40.550489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:50.020 [2024-07-15 21:26:40.649232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.954 test_start
00:04:50.954 test_end
00:04:50.954 Performance: 422190 events per second
00:04:50.954
00:04:50.954 real 0m1.257s
00:04:50.954 user 0m1.181s
00:04:50.954 sys 0m0.071s
00:04:50.954 21:26:41 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:50.955 21:26:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:04:50.955 ************************************
00:04:50.955 END TEST event_reactor_perf
00:04:50.955 ************************************
00:04:51.214 21:26:41 event -- common/autotest_common.sh@1142 -- # return 0
00:04:51.214 21:26:41 event -- event/event.sh@49 -- # uname -s
00:04:51.214 21:26:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:04:51.214 21:26:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:51.214 21:26:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:51.214 21:26:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:51.214 21:26:41 event -- common/autotest_common.sh@10 -- # set +x
00:04:51.214 ************************************
00:04:51.214 START TEST event_scheduler
00:04:51.214 ************************************
00:04:51.214 21:26:41 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:04:51.214 * Looking for test storage...
00:04:51.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:04:51.214 21:26:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:04:51.214 21:26:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=258447
00:04:51.214 21:26:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:04:51.214 21:26:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:04:51.214 21:26:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 258447
00:04:51.214 21:26:41 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 258447 ']'
00:04:51.214 21:26:41 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:51.214 21:26:41 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:51.214 21:26:41 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:51.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:51.214 21:26:41 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:51.214 21:26:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:51.214 [2024-07-15 21:26:41.899547] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:04:51.214 [2024-07-15 21:26:41.899636] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258447 ]
00:04:51.214 EAL: No free 2048 kB hugepages reported on node 1
00:04:51.214 [2024-07-15 21:26:41.967382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:51.472 [2024-07-15 21:26:42.097163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:51.472 [2024-07-15 21:26:42.097198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:51.472 [2024-07-15 21:26:42.097258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:04:51.472 [2024-07-15 21:26:42.097290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:04:51.472 21:26:42 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:51.472 21:26:42 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:04:51.472 21:26:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:04:51.472 21:26:42 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.472 21:26:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:51.472 [2024-07-15 21:26:42.178209] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:04:51.472 [2024-07-15 21:26:42.178243] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:04:51.472 [2024-07-15 21:26:42.178263] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:04:51.472 [2024-07-15 21:26:42.178283] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:04:51.472 [2024-07-15 21:26:42.178295] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:04:51.472 21:26:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.472 21:26:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:04:51.472 21:26:42 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.472 21:26:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 [2024-07-15 21:26:42.269219] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:04:51.731 21:26:42 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:51.731 21:26:42 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:51.731 21:26:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 ************************************
00:04:51.731 START TEST scheduler_create_thread
00:04:51.731 ************************************
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 2
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 3
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 4
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 5
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 6
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 7
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 8
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 9
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 10
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:51.731 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:51.732 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:04:51.732 21:26:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:04:51.732 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:04:51.732 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:52.298 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:04:52.298
00:04:52.298 real 0m0.591s
00:04:52.298 user 0m0.009s
00:04:52.298 sys 0m0.005s
00:04:52.298 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:52.298 21:26:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:04:52.298 ************************************
00:04:52.298 END TEST scheduler_create_thread
00:04:52.298 ************************************
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0
00:04:52.298 21:26:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:04:52.298 21:26:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 258447
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 258447 ']'
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 258447
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@953 -- # uname
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 258447
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 258447'
00:04:52.298 killing process with pid 258447
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 258447
00:04:52.298 21:26:42 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 258447
00:04:52.863 [2024-07-15 21:26:43.369725] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:04:52.863
00:04:52.863 real 0m1.753s
00:04:52.863 user 0m2.263s
00:04:52.863 sys 0m0.334s
00:04:52.863 21:26:43 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:52.863 21:26:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:52.863 ************************************
00:04:52.863 END TEST event_scheduler
00:04:52.863 ************************************
00:04:52.863 21:26:43 event -- common/autotest_common.sh@1142 -- # return 0
00:04:52.863 21:26:43 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:52.863 21:26:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:52.863 21:26:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:52.863 21:26:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.863 21:26:43 event -- common/autotest_common.sh@10 -- # set +x
00:04:52.863 ************************************
00:04:52.863 START TEST app_repeat
00:04:52.863 ************************************
00:04:52.863 21:26:43 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=258615
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 258615'
00:04:52.863 Process app_repeat pid: 258615
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:52.863 spdk_app_start Round 0
00:04:52.863 21:26:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 258615 /var/tmp/spdk-nbd.sock
00:04:52.863 21:26:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 258615 ']'
00:04:52.863 21:26:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:52.863 21:26:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:04:52.863 21:26:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:52.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:52.863 21:26:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:04:52.863 21:26:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:53.120 [2024-07-15 21:26:43.634012] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:04:53.120 [2024-07-15 21:26:43.634096] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid258615 ]
00:04:53.120 EAL: No free 2048 kB hugepages reported on node 1
00:04:53.120 [2024-07-15 21:26:43.700924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:53.120 [2024-07-15 21:26:43.830205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:04:53.120 [2024-07-15 21:26:43.830213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.377 21:26:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:53.377 21:26:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:04:53.377 21:26:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:53.634 Malloc0
00:04:53.634 21:26:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:53.892 Malloc1
00:04:53.892 21:26:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.892 21:26:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.150 /dev/nbd0 00:04:54.150 21:26:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.150 21:26:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@871 -- # 
break 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.150 1+0 records in 00:04:54.150 1+0 records out 00:04:54.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148653 s, 27.6 MB/s 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.150 21:26:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.150 21:26:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.150 21:26:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.150 21:26:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.408 /dev/nbd1 00:04:54.665 21:26:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.665 21:26:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 
)) 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.665 21:26:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.665 1+0 records in 00:04:54.665 1+0 records out 00:04:54.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182361 s, 22.5 MB/s 00:04:54.666 21:26:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.666 21:26:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.666 21:26:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.666 21:26:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.666 21:26:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.666 21:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.666 21:26:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.666 21:26:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.666 21:26:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.666 21:26:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.923 { 00:04:54.923 "nbd_device": "/dev/nbd0", 00:04:54.923 "bdev_name": 
"Malloc0" 00:04:54.923 }, 00:04:54.923 { 00:04:54.923 "nbd_device": "/dev/nbd1", 00:04:54.923 "bdev_name": "Malloc1" 00:04:54.923 } 00:04:54.923 ]' 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.923 { 00:04:54.923 "nbd_device": "/dev/nbd0", 00:04:54.923 "bdev_name": "Malloc0" 00:04:54.923 }, 00:04:54.923 { 00:04:54.923 "nbd_device": "/dev/nbd1", 00:04:54.923 "bdev_name": "Malloc1" 00:04:54.923 } 00:04:54.923 ]' 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.923 /dev/nbd1' 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.923 /dev/nbd1' 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 
count=256 00:04:54.923 256+0 records in 00:04:54.923 256+0 records out 00:04:54.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00594472 s, 176 MB/s 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.923 256+0 records in 00:04:54.923 256+0 records out 00:04:54.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204917 s, 51.2 MB/s 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.923 256+0 records in 00:04:54.923 256+0 records out 00:04:54.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255601 s, 41.0 MB/s 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.923 21:26:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 
00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.924 21:26:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.181 21:26:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.181 21:26:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.181 21:26:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.181 21:26:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.181 21:26:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.181 21:26:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.439 21:26:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.439 21:26:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.439 21:26:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.439 21:26:45 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.697 21:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.954 21:26:46 
event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.954 21:26:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.954 21:26:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.212 21:26:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.470 [2024-07-15 21:26:47.114283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.470 [2024-07-15 21:26:47.213187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.470 [2024-07-15 21:26:47.213197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.470 [2024-07-15 21:26:47.257366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.471 [2024-07-15 21:26:47.257425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.749 21:26:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.749 21:26:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:59.749 spdk_app_start Round 1 00:04:59.749 21:26:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 258615 /var/tmp/spdk-nbd.sock 00:04:59.749 21:26:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 258615 ']' 00:04:59.749 21:26:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.749 21:26:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.749 21:26:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:59.749 21:26:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.749 21:26:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 21:26:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.749 21:26:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:59.749 21:26:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.749 Malloc0 00:05:00.006 21:26:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.271 Malloc1 00:05:00.271 21:26:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.271 21:26:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.572 /dev/nbd0 00:05:00.572 21:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.572 21:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.572 1+0 records in 00:05:00.572 1+0 records out 00:05:00.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183462 s, 22.3 MB/s 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:00.572 21:26:51 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:00.572 21:26:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:00.572 21:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.572 21:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.572 21:26:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.873 /dev/nbd1 00:05:00.873 21:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.873 21:26:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.873 1+0 records in 00:05:00.873 1+0 records out 00:05:00.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220835 s, 18.5 MB/s 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:00.873 21:26:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:00.873 21:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.873 21:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.873 21:26:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.873 21:26:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.873 21:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.136 { 00:05:01.136 "nbd_device": "/dev/nbd0", 00:05:01.136 "bdev_name": "Malloc0" 00:05:01.136 }, 00:05:01.136 { 00:05:01.136 "nbd_device": "/dev/nbd1", 00:05:01.136 "bdev_name": "Malloc1" 00:05:01.136 } 00:05:01.136 ]' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.136 { 00:05:01.136 "nbd_device": "/dev/nbd0", 00:05:01.136 "bdev_name": "Malloc0" 00:05:01.136 }, 00:05:01.136 { 00:05:01.136 "nbd_device": "/dev/nbd1", 00:05:01.136 "bdev_name": "Malloc1" 00:05:01.136 } 00:05:01.136 ]' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.136 /dev/nbd1' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.136 /dev/nbd1' 00:05:01.136 
21:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.136 256+0 records in 00:05:01.136 256+0 records out 00:05:01.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532546 s, 197 MB/s 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.136 256+0 records in 00:05:01.136 256+0 records out 00:05:01.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237946 s, 44.1 MB/s 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.136 256+0 records in 00:05:01.136 256+0 records out 00:05:01.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234999 s, 44.6 MB/s 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.136 21:26:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.701 21:26:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.958 21:26:52 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break
00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:01.958 21:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:02.215 21:26:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:02.215 21:26:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:02.474 21:26:53 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:02.731 [2024-07-15 21:26:53.343570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:02.731 [2024-07-15 21:26:53.443220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:02.731 [2024-07-15 21:26:53.443273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.731 [2024-07-15 21:26:53.489758] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:02.731 [2024-07-15 21:26:53.489820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:06.006 21:26:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:06.006 21:26:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:06.006 spdk_app_start Round 2
00:05:06.006 21:26:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 258615 /var/tmp/spdk-nbd.sock
00:05:06.006 21:26:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 258615 ']'
00:05:06.006 21:26:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:06.006 21:26:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:06.006 21:26:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:06.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:06.006 21:26:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:06.006 21:26:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:06.006 21:26:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:06.006 21:26:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:06.006 21:26:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:06.006 Malloc0
00:05:06.006 21:26:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:06.571 Malloc1
00:05:06.571 21:26:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:06.571 /dev/nbd0
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:06.571 1+0 records in
00:05:06.571 1+0 records out
00:05:06.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147968 s, 27.7 MB/s
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:06.571 21:26:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:06.571 21:26:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:06.828 /dev/nbd1
00:05:06.828 21:26:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:06.828 21:26:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:05:06.828 21:26:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:06.828 1+0 records in
00:05:06.828 1+0 records out
00:05:06.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220764 s, 18.6 MB/s
00:05:07.086 21:26:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:07.086 21:26:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:05:07.086 21:26:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:07.086 21:26:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:05:07.086 21:26:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:05:07.086 21:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:07.086 21:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:07.086 21:26:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:07.086 21:26:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:07.086 21:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:07.086 21:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:07.086 {
00:05:07.086 "nbd_device": "/dev/nbd0",
00:05:07.086 "bdev_name": "Malloc0"
00:05:07.086 },
00:05:07.086 {
00:05:07.086 "nbd_device": "/dev/nbd1",
00:05:07.086 "bdev_name": "Malloc1"
00:05:07.086 }
00:05:07.086 ]'
00:05:07.086 21:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:07.086 {
00:05:07.086 "nbd_device": "/dev/nbd0",
00:05:07.086 "bdev_name": "Malloc0"
00:05:07.086 },
00:05:07.086 {
00:05:07.086 "nbd_device": "/dev/nbd1",
00:05:07.086 "bdev_name": "Malloc1"
00:05:07.086 }
00:05:07.086 ]'
00:05:07.086 21:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:07.343 21:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:07.343 /dev/nbd1'
00:05:07.343 21:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:07.344 /dev/nbd1'
21:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:07.344 256+0 records in
00:05:07.344 256+0 records out
00:05:07.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0058399 s, 180 MB/s
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:07.344 256+0 records in
00:05:07.344 256+0 records out
00:05:07.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202866 s, 51.7 MB/s
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:07.344 256+0 records in
00:05:07.344 256+0 records out
00:05:07.344 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241245 s, 43.5 MB/s
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:07.344 21:26:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:07.601 21:26:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:07.858 21:26:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:08.113 21:26:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:08.113 21:26:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:08.370 21:26:59 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:08.627 [2024-07-15 21:26:59.235689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:08.627 [2024-07-15 21:26:59.332021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:08.627 [2024-07-15 21:26:59.332021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.627 [2024-07-15 21:26:59.377613] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:08.627 [2024-07-15 21:26:59.377669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:11.899 21:27:02 event.app_repeat -- event/event.sh@38 -- # waitforlisten 258615 /var/tmp/spdk-nbd.sock
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 258615 ']'
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:11.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:11.899 21:27:02 event.app_repeat -- event/event.sh@39 -- # killprocess 258615
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 258615 ']'
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 258615
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 258615
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 258615'
00:05:11.899 killing process with pid 258615
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@967 -- # kill 258615
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@972 -- # wait 258615
00:05:11.899 spdk_app_start is called in Round 0.
00:05:11.899 Shutdown signal received, stop current app iteration
00:05:11.899 Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 reinitialization...
00:05:11.899 spdk_app_start is called in Round 1.
00:05:11.899 Shutdown signal received, stop current app iteration
00:05:11.899 Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 reinitialization...
00:05:11.899 spdk_app_start is called in Round 2.
00:05:11.899 Shutdown signal received, stop current app iteration
00:05:11.899 Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 reinitialization...
00:05:11.899 spdk_app_start is called in Round 3.
00:05:11.899 Shutdown signal received, stop current app iteration
00:05:11.899 21:27:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:11.899 21:27:02 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:11.899
00:05:11.899 real 0m18.952s
00:05:11.899 user 0m42.124s
00:05:11.899 sys 0m3.356s
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:11.899 21:27:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:11.899 ************************************
00:05:11.899 END TEST app_repeat
00:05:11.899 ************************************
00:05:11.899 21:27:02 event -- common/autotest_common.sh@1142 -- # return 0
00:05:11.899 21:27:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:11.899 21:27:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:11.899 21:27:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:11.899 21:27:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:11.899 21:27:02 event -- common/autotest_common.sh@10 -- # set +x
00:05:11.899 ************************************
00:05:11.899 START TEST cpu_locks
00:05:11.899 ************************************
00:05:11.899 21:27:02 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:11.899 * Looking for test storage...
00:05:11.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:11.899 21:27:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:11.899 21:27:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:11.899 21:27:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:11.899 21:27:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:11.899 21:27:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:11.899 21:27:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:11.899 21:27:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:12.156 ************************************
00:05:12.156 START TEST default_locks
00:05:12.156 ************************************
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=260765
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 260765
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 260765 ']'
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:12.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:12.156 21:27:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:12.156 [2024-07-15 21:27:02.744153] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:05:12.156 [2024-07-15 21:27:02.744232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid260765 ]
00:05:12.156 EAL: No free 2048 kB hugepages reported on node 1
00:05:12.156 [2024-07-15 21:27:02.793029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.156 [2024-07-15 21:27:02.892295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.413 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:12.413 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:05:12.413 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 260765
00:05:12.413 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:12.413 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 260765
00:05:12.984 lslocks: write error
00:05:12.984 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 260765
00:05:12.984 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 260765 ']'
00:05:12.984 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 260765
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 260765
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 260765'
00:05:12.985 killing process with pid 260765
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 260765
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 260765
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 260765
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 260765
00:05:12.985 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 260765
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 260765 ']'
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:13.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:13.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (260765) - No such process
00:05:13.243 ERROR: process (pid: 260765) is no longer running
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:13.243
00:05:13.243 real 0m1.086s
00:05:13.243 user 0m1.124s
00:05:13.243 sys 0m0.503s
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:13.243 21:27:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:13.243 ************************************
00:05:13.243 END TEST default_locks
00:05:13.243 ************************************
00:05:13.243 21:27:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:13.243 21:27:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:13.243 21:27:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:13.243 21:27:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:13.243 21:27:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:13.243 ************************************
00:05:13.243 START TEST default_locks_via_rpc
00:05:13.243 ************************************
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=260901
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 260901
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 260901 ']'
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:13.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:13.243 21:27:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:13.243 [2024-07-15 21:27:03.875983] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:05:13.243 [2024-07-15 21:27:03.876074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid260901 ]
00:05:13.243 EAL: No free 2048 kB hugepages reported on node 1
00:05:13.243 [2024-07-15 21:27:03.924421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.243 [2024-07-15 21:27:04.018002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.500 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:13.500 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:13.500 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:13.500 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:13.500 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:13.500 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:13.500 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:13.500 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 260901
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 260901
00:05:13.501 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 260901
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 260901 ']'
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 260901
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 260901
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 260901'
00:05:14.066 killing process with pid 260901
00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 --
# kill 260901 00:05:14.066 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 260901 00:05:14.324 00:05:14.324 real 0m1.073s 00:05:14.324 user 0m1.130s 00:05:14.324 sys 0m0.479s 00:05:14.324 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.324 21:27:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.324 ************************************ 00:05:14.324 END TEST default_locks_via_rpc 00:05:14.324 ************************************ 00:05:14.324 21:27:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:14.324 21:27:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.324 21:27:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.324 21:27:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.324 21:27:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.324 ************************************ 00:05:14.324 START TEST non_locking_app_on_locked_coremask 00:05:14.324 ************************************ 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=261031 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 261031 /var/tmp/spdk.sock 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 261031 ']' 00:05:14.324 21:27:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.324 21:27:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.324 [2024-07-15 21:27:05.001389] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:14.324 [2024-07-15 21:27:05.001483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid261031 ] 00:05:14.324 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.324 [2024-07-15 21:27:05.050252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.582 [2024-07-15 21:27:05.154253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=261035 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 261035 
/var/tmp/spdk2.sock 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 261035 ']' 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.582 21:27:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.839 [2024-07-15 21:27:05.422708] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:14.839 [2024-07-15 21:27:05.422812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid261035 ] 00:05:14.839 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.839 [2024-07-15 21:27:05.498004] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.839 [2024-07-15 21:27:05.498034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.096 [2024-07-15 21:27:05.690765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.661 21:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.661 21:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:15.661 21:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 261031 00:05:15.661 21:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 261031 00:05:15.661 21:27:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.595 lslocks: write error 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 261031 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 261031 ']' 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 261031 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 261031 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 261031' 00:05:16.595 killing process with pid 261031 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 261031 00:05:16.595 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 261031 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 261035 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 261035 ']' 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 261035 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 261035 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 261035' 00:05:17.161 killing process with pid 261035 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 261035 00:05:17.161 21:27:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 261035 00:05:17.420 00:05:17.420 real 0m3.058s 00:05:17.420 user 0m3.409s 00:05:17.420 sys 0m1.026s 00:05:17.420 21:27:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.420 21:27:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.420 ************************************ 00:05:17.420 END TEST non_locking_app_on_locked_coremask 00:05:17.420 ************************************ 00:05:17.420 21:27:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:17.420 21:27:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:17.420 21:27:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.420 21:27:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.420 21:27:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.420 ************************************ 00:05:17.420 START TEST locking_app_on_unlocked_coremask 00:05:17.420 ************************************ 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=261285 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 261285 /var/tmp/spdk.sock 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 261285 ']' 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.420 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.420 [2024-07-15 21:27:08.123025] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:17.420 [2024-07-15 21:27:08.123121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid261285 ] 00:05:17.420 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.420 [2024-07-15 21:27:08.182789] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:17.420 [2024-07-15 21:27:08.182838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.678 [2024-07-15 21:27:08.301926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=261547 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 261547 /var/tmp/spdk2.sock 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 261547 ']' 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.936 21:27:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.936 [2024-07-15 21:27:08.590235] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:17.937 [2024-07-15 21:27:08.590324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid261547 ] 00:05:17.937 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.937 [2024-07-15 21:27:08.679288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.195 [2024-07-15 21:27:08.914374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.128 21:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.128 21:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:19.128 21:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 261547 00:05:19.128 21:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 261547 00:05:19.128 21:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.695 lslocks: write error 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 261285 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 261285 ']' 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 261285 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 261285 00:05:19.695 21:27:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 261285' 00:05:19.695 killing process with pid 261285 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 261285 00:05:19.695 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 261285 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 261547 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 261547 ']' 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 261547 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 261547 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 261547' 00:05:20.262 killing process with pid 261547 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 261547 00:05:20.262 21:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 261547 00:05:20.521 00:05:20.521 real 0m3.142s 00:05:20.521 user 0m3.514s 00:05:20.521 sys 0m1.031s 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.521 ************************************ 00:05:20.521 END TEST locking_app_on_unlocked_coremask 00:05:20.521 ************************************ 00:05:20.521 21:27:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:20.521 21:27:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:20.521 21:27:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.521 21:27:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.521 21:27:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.521 ************************************ 00:05:20.521 START TEST locking_app_on_locked_coremask 00:05:20.521 ************************************ 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=262131 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 262131 /var/tmp/spdk.sock 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 262131 
']' 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.521 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.781 [2024-07-15 21:27:11.315241] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:20.781 [2024-07-15 21:27:11.315315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262131 ] 00:05:20.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.781 [2024-07-15 21:27:11.365218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.781 [2024-07-15 21:27:11.466646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=262223 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 262223 
/var/tmp/spdk2.sock 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 262223 /var/tmp/spdk2.sock 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 262223 /var/tmp/spdk2.sock 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 262223 ']' 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.039 21:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.039 [2024-07-15 21:27:11.729414] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:21.039 [2024-07-15 21:27:11.729515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262223 ] 00:05:21.039 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.039 [2024-07-15 21:27:11.808059] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 262131 has claimed it. 00:05:21.039 [2024-07-15 21:27:11.808126] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (262223) - No such process 00:05:21.974 ERROR: process (pid: 262223) is no longer running 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 262131 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 262131 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.974 lslocks: write error 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 262131 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 262131 ']' 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 262131 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 262131 00:05:21.974 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.233 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.233 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 262131' 00:05:22.233 killing process with pid 262131 00:05:22.233 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 262131 00:05:22.233 21:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 262131 00:05:22.491 00:05:22.491 real 0m1.784s 00:05:22.491 user 0m2.055s 00:05:22.491 sys 0m0.587s 00:05:22.491 21:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.491 21:27:13 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.491 ************************************ 00:05:22.491 END TEST locking_app_on_locked_coremask 00:05:22.491 ************************************ 00:05:22.491 21:27:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:22.491 21:27:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:22.491 21:27:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.491 21:27:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.491 21:27:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.491 ************************************ 00:05:22.491 START TEST locking_overlapped_coremask 00:05:22.491 ************************************ 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=262359 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 262359 /var/tmp/spdk.sock 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 262359 ']' 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:05:22.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.491 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.491 [2024-07-15 21:27:13.165645] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:22.491 [2024-07-15 21:27:13.165743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262359 ] 00:05:22.491 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.491 [2024-07-15 21:27:13.224531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.748 [2024-07-15 21:27:13.322703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.748 [2024-07-15 21:27:13.322746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.748 [2024-07-15 21:27:13.322749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.748 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.748 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:22.748 21:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=262375 00:05:22.748 21:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:22.748 21:27:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 262375 /var/tmp/spdk2.sock 00:05:22.748 21:27:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:22.749 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 262375 /var/tmp/spdk2.sock 00:05:22.749 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 262375 /var/tmp/spdk2.sock 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 262375 ']' 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.006 21:27:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.006 [2024-07-15 21:27:13.596817] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:23.006 [2024-07-15 21:27:13.596902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262375 ] 00:05:23.006 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.006 [2024-07-15 21:27:13.676937] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 262359 has claimed it. 00:05:23.006 [2024-07-15 21:27:13.676989] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (262375) - No such process 00:05:23.568 ERROR: process (pid: 262375) is no longer running 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask 
-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 262359 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 262359 ']' 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 262359 00:05:23.568 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.569 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.569 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 262359 00:05:23.569 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.569 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.569 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 262359' 00:05:23.569 killing process with pid 262359 00:05:23.569 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 262359 00:05:23.569 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 262359 00:05:24.136 00:05:24.136 real 0m1.542s 00:05:24.136 user 0m4.238s 00:05:24.136 sys 0m0.382s 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- 
# set +x 00:05:24.136 ************************************ 00:05:24.136 END TEST locking_overlapped_coremask 00:05:24.136 ************************************ 00:05:24.136 21:27:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.136 21:27:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:24.136 21:27:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.136 21:27:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.136 21:27:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.136 ************************************ 00:05:24.136 START TEST locking_overlapped_coremask_via_rpc 00:05:24.136 ************************************ 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=262565 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 262565 /var/tmp/spdk.sock 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 262565 ']' 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.136 21:27:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.136 [2024-07-15 21:27:14.754616] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:24.136 [2024-07-15 21:27:14.754713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262565 ] 00:05:24.136 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.136 [2024-07-15 21:27:14.803995] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:24.136 [2024-07-15 21:27:14.804031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.136 [2024-07-15 21:27:14.905144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.136 [2024-07-15 21:27:14.905193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.136 [2024-07-15 21:27:14.905217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=262600 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 262600 /var/tmp/spdk2.sock 00:05:24.394 21:27:15 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 262600 ']' 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.394 21:27:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.394 [2024-07-15 21:27:15.179053] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:24.394 [2024-07-15 21:27:15.179217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262600 ] 00:05:24.652 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.652 [2024-07-15 21:27:15.259997] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.652 [2024-07-15 21:27:15.260029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.909 [2024-07-15 21:27:15.463018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.909 [2024-07-15 21:27:15.466187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:24.909 [2024-07-15 21:27:15.466190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.475 21:27:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.475 [2024-07-15 21:27:16.228248] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 262565 has claimed it. 00:05:25.475 request: 00:05:25.475 { 00:05:25.475 "method": "framework_enable_cpumask_locks", 00:05:25.475 "req_id": 1 00:05:25.475 } 00:05:25.475 Got JSON-RPC error response 00:05:25.475 response: 00:05:25.475 { 00:05:25.475 "code": -32603, 00:05:25.475 "message": "Failed to claim CPU core: 2" 00:05:25.475 } 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 262565 /var/tmp/spdk.sock 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- 
# '[' -z 262565 ']' 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.475 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 262600 /var/tmp/spdk2.sock 00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 262600 ']' 00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.040 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.298 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.298 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:26.298 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:26.298 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.298 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.298 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.298 00:05:26.298 real 0m2.138s 00:05:26.298 user 0m1.209s 00:05:26.298 sys 0m0.185s 00:05:26.298 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.298 21:27:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.298 ************************************ 00:05:26.298 END TEST locking_overlapped_coremask_via_rpc 00:05:26.298 ************************************ 00:05:26.298 21:27:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:26.298 21:27:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:26.298 21:27:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
262565 ]] 00:05:26.298 21:27:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 262565 00:05:26.298 21:27:16 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 262565 ']' 00:05:26.298 21:27:16 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 262565 00:05:26.298 21:27:16 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:26.298 21:27:16 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.298 21:27:16 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 262565 00:05:26.299 21:27:16 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.299 21:27:16 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.299 21:27:16 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 262565' 00:05:26.299 killing process with pid 262565 00:05:26.299 21:27:16 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 262565 00:05:26.299 21:27:16 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 262565 00:05:26.556 21:27:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 262600 ]] 00:05:26.557 21:27:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 262600 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 262600 ']' 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 262600 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 262600 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:26.557 21:27:17 event.cpu_locks -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 262600' 00:05:26.557 killing process with pid 262600 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 262600 00:05:26.557 21:27:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 262600 00:05:26.814 21:27:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.814 21:27:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:26.814 21:27:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 262565 ]] 00:05:26.814 21:27:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 262565 00:05:26.814 21:27:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 262565 ']' 00:05:26.814 21:27:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 262565 00:05:26.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (262565) - No such process 00:05:26.814 21:27:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 262565 is not found' 00:05:26.814 Process with pid 262565 is not found 00:05:26.814 21:27:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 262600 ]] 00:05:26.814 21:27:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 262600 00:05:26.814 21:27:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 262600 ']' 00:05:26.814 21:27:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 262600 00:05:26.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (262600) - No such process 00:05:26.814 21:27:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 262600 is not found' 00:05:26.814 Process with pid 262600 is not found 00:05:26.814 21:27:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:26.814 00:05:26.814 real 0m14.909s 00:05:26.814 user 0m27.535s 00:05:26.814 sys 0m5.003s 00:05:26.814 21:27:17 event.cpu_locks -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.814 21:27:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.814 ************************************ 00:05:26.814 END TEST cpu_locks 00:05:26.814 ************************************ 00:05:26.814 21:27:17 event -- common/autotest_common.sh@1142 -- # return 0 00:05:26.814 00:05:26.814 real 0m39.804s 00:05:26.814 user 1m18.643s 00:05:26.814 sys 0m9.163s 00:05:26.814 21:27:17 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.814 21:27:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.814 ************************************ 00:05:26.814 END TEST event 00:05:26.814 ************************************ 00:05:26.814 21:27:17 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.814 21:27:17 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:26.814 21:27:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.814 21:27:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.814 21:27:17 -- common/autotest_common.sh@10 -- # set +x 00:05:26.814 ************************************ 00:05:26.814 START TEST thread 00:05:26.814 ************************************ 00:05:26.814 21:27:17 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:27.072 * Looking for test storage... 
00:05:27.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:27.072 21:27:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.072 21:27:17 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:27.072 21:27:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.072 21:27:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.073 ************************************ 00:05:27.073 START TEST thread_poller_perf 00:05:27.073 ************************************ 00:05:27.073 21:27:17 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.073 [2024-07-15 21:27:17.694193] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:27.073 [2024-07-15 21:27:17.694260] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid262899 ] 00:05:27.073 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.073 [2024-07-15 21:27:17.744697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.073 [2024-07-15 21:27:17.839041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.073 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:28.443 ====================================== 00:05:28.443 busy:2708888421 (cyc) 00:05:28.443 total_run_count: 330000 00:05:28.443 tsc_hz: 2700000000 (cyc) 00:05:28.443 ====================================== 00:05:28.443 poller_cost: 8208 (cyc), 3040 (nsec) 00:05:28.443 00:05:28.443 real 0m1.258s 00:05:28.443 user 0m1.192s 00:05:28.443 sys 0m0.062s 00:05:28.443 21:27:18 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.443 21:27:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.443 ************************************ 00:05:28.443 END TEST thread_poller_perf 00:05:28.443 ************************************ 00:05:28.443 21:27:18 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:28.443 21:27:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.443 21:27:18 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:28.443 21:27:18 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.443 21:27:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.443 ************************************ 00:05:28.443 START TEST thread_poller_perf 00:05:28.443 ************************************ 00:05:28.443 21:27:18 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.443 [2024-07-15 21:27:19.005554] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:28.443 [2024-07-15 21:27:19.005621] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263028 ] 00:05:28.443 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.443 [2024-07-15 21:27:19.061854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.443 [2024-07-15 21:27:19.166985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.443 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:29.814 ====================================== 00:05:29.814 busy:2702757702 (cyc) 00:05:29.814 total_run_count: 4714000 00:05:29.814 tsc_hz: 2700000000 (cyc) 00:05:29.814 ====================================== 00:05:29.814 poller_cost: 573 (cyc), 212 (nsec) 00:05:29.814 00:05:29.814 real 0m1.273s 00:05:29.814 user 0m1.197s 00:05:29.814 sys 0m0.071s 00:05:29.814 21:27:20 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.814 21:27:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.814 ************************************ 00:05:29.814 END TEST thread_poller_perf 00:05:29.814 ************************************ 00:05:29.814 21:27:20 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:29.814 21:27:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:29.814 00:05:29.814 real 0m2.690s 00:05:29.814 user 0m2.451s 00:05:29.814 sys 0m0.241s 00:05:29.814 21:27:20 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.814 21:27:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.814 ************************************ 00:05:29.814 END TEST thread 00:05:29.814 ************************************ 00:05:29.814 21:27:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:29.814 21:27:20 -- spdk/autotest.sh@183 -- # run_test 
accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:29.814 21:27:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.814 21:27:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.814 21:27:20 -- common/autotest_common.sh@10 -- # set +x 00:05:29.814 ************************************ 00:05:29.814 START TEST accel 00:05:29.814 ************************************ 00:05:29.814 21:27:20 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:29.814 * Looking for test storage... 00:05:29.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:29.814 21:27:20 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:29.814 21:27:20 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:29.814 21:27:20 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.814 21:27:20 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=263279 00:05:29.814 21:27:20 accel -- accel/accel.sh@63 -- # waitforlisten 263279 00:05:29.814 21:27:20 accel -- common/autotest_common.sh@829 -- # '[' -z 263279 ']' 00:05:29.814 21:27:20 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:29.814 21:27:20 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:29.814 21:27:20 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.814 21:27:20 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.814 21:27:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.814 21:27:20 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.814 21:27:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.814 21:27:20 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.814 21:27:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.814 21:27:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.814 21:27:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.814 21:27:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.814 21:27:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:29.814 21:27:20 accel -- accel/accel.sh@41 -- # jq -r . 00:05:29.815 [2024-07-15 21:27:20.449441] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:29.815 [2024-07-15 21:27:20.449537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263279 ] 00:05:29.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.815 [2024-07-15 21:27:20.509115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.072 [2024-07-15 21:27:20.625277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.073 21:27:20 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.073 21:27:20 accel -- common/autotest_common.sh@862 -- # return 0 00:05:30.073 21:27:20 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:30.073 21:27:20 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:30.073 21:27:20 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:30.073 21:27:20 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:30.073 21:27:20 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:30.073 21:27:20 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:30.073 21:27:20 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:30.073 21:27:20 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.073 21:27:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 
accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # IFS== 00:05:30.330 21:27:20 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:30.330 21:27:20 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:30.330 21:27:20 accel -- accel/accel.sh@75 -- # killprocess 263279 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@948 -- # '[' -z 263279 ']' 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@952 -- # kill -0 263279 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@953 -- # uname 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 263279 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 263279' 00:05:30.330 killing process with pid 263279 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@967 -- # kill 263279 00:05:30.330 21:27:20 accel -- common/autotest_common.sh@972 -- # wait 263279 00:05:30.588 21:27:21 accel -- accel/accel.sh@76 -- # trap - ERR 
00:05:30.588 21:27:21 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:30.588 21:27:21 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:30.588 21:27:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.588 21:27:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.588 21:27:21 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:30.588 21:27:21 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:30.588 21:27:21 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.588 21:27:21 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:30.588 21:27:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.588 21:27:21 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:30.588 21:27:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:30.588 21:27:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.588 21:27:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.588 ************************************ 00:05:30.588 START TEST accel_missing_filename 00:05:30.588 ************************************ 00:05:30.588 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:30.588 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:30.588 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:30.588 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:30.588 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.588 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:30.588 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.588 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:30.588 21:27:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:30.588 21:27:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:30.588 21:27:21 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.588 21:27:21 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.588 21:27:21 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.588 21:27:21 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.588 21:27:21 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.588 21:27:21 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:30.588 21:27:21 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:30.588 [2024-07-15 21:27:21.376213] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:30.588 [2024-07-15 21:27:21.376291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263408 ] 00:05:30.845 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.845 [2024-07-15 21:27:21.439074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.845 [2024-07-15 21:27:21.558597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.845 [2024-07-15 21:27:21.609920] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.101 [2024-07-15 21:27:21.659042] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:31.101 A filename is required. 
00:05:31.101 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:31.101 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.101 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:31.101 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.101 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:31.101 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.101 00:05:31.101 real 0m0.414s 00:05:31.101 user 0m0.319s 00:05:31.101 sys 0m0.131s 00:05:31.101 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.101 21:27:21 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:31.101 ************************************ 00:05:31.101 END TEST accel_missing_filename 00:05:31.101 ************************************ 00:05:31.101 21:27:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.101 21:27:21 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.101 21:27:21 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:31.101 21:27:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.101 21:27:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.101 ************************************ 00:05:31.101 START TEST accel_compress_verify 00:05:31.101 ************************************ 00:05:31.102 21:27:21 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.102 21:27:21 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:31.102 21:27:21 
accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.102 21:27:21 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.102 21:27:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.102 21:27:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.102 21:27:21 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.102 21:27:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:31.102 21:27:21 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:31.102 [2024-07-15 21:27:21.850642] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:31.102 [2024-07-15 21:27:21.850715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263437 ] 00:05:31.102 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.358 [2024-07-15 21:27:21.910312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.358 [2024-07-15 21:27:22.030153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.358 [2024-07-15 21:27:22.081798] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.358 [2024-07-15 21:27:22.131311] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:31.616 00:05:31.616 Compression does not support the verify option, aborting. 00:05:31.616 21:27:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:31.616 21:27:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.616 21:27:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:31.616 21:27:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.616 21:27:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:31.616 21:27:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.616 00:05:31.616 real 0m0.413s 00:05:31.616 user 0m0.319s 00:05:31.616 sys 0m0.129s 00:05:31.616 21:27:22 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.616 21:27:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 ************************************ 00:05:31.616 END TEST accel_compress_verify 00:05:31.616 ************************************ 00:05:31.616 21:27:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.616 21:27:22 accel -- 
accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:31.616 21:27:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:31.616 21:27:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.616 21:27:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 ************************************ 00:05:31.616 START TEST accel_wrong_workload 00:05:31.616 ************************************ 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:31.616 21:27:22 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:31.616 Unsupported workload type: foobar 00:05:31.616 [2024-07-15 21:27:22.319767] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:31.616 accel_perf options: 00:05:31.616 [-h help message] 00:05:31.616 [-q queue depth per core] 00:05:31.616 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:31.616 [-T number of threads per core 00:05:31.616 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:31.616 [-t time in seconds] 00:05:31.616 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:31.616 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:31.616 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:31.616 [-l for compress/decompress workloads, name of uncompressed input file 00:05:31.616 [-S for crc32c workload, use this seed value (default 0) 00:05:31.616 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:31.616 [-f for fill workload, use this BYTE value (default 255) 00:05:31.616 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:31.616 [-y verify result if this switch is on] 00:05:31.616 [-a tasks to allocate per core (default: same value as -q)] 00:05:31.616 Can be used to spread operations across a wider range of memory. 
00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.616 00:05:31.616 real 0m0.026s 00:05:31.616 user 0m0.010s 00:05:31.616 sys 0m0.016s 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.616 21:27:22 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 ************************************ 00:05:31.616 END TEST accel_wrong_workload 00:05:31.616 ************************************ 00:05:31.616 Error: writing output failed: Broken pipe 00:05:31.616 21:27:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.616 21:27:22 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:31.616 21:27:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:31.616 21:27:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.616 21:27:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.616 ************************************ 00:05:31.616 START TEST accel_negative_buffers 00:05:31.616 ************************************ 00:05:31.616 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:31.616 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:31.616 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:31.616 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:31.616 21:27:22 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.616 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:31.616 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.616 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:31.616 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:31.616 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:31.616 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.616 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.616 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.616 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.617 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.617 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:31.617 21:27:22 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:31.617 -x option must be non-negative. 00:05:31.617 [2024-07-15 21:27:22.397278] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:31.617 accel_perf options: 00:05:31.617 [-h help message] 00:05:31.617 [-q queue depth per core] 00:05:31.617 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:31.617 [-T number of threads per core 00:05:31.617 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:05:31.617 [-t time in seconds] 00:05:31.617 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:31.617 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:31.617 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:31.617 [-l for compress/decompress workloads, name of uncompressed input file 00:05:31.617 [-S for crc32c workload, use this seed value (default 0) 00:05:31.617 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:31.617 [-f for fill workload, use this BYTE value (default 255) 00:05:31.617 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:31.617 [-y verify result if this switch is on] 00:05:31.617 [-a tasks to allocate per core (default: same value as -q)] 00:05:31.617 Can be used to spread operations across a wider range of memory. 
00:05:31.617 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:31.617 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.617 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:31.617 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.617 00:05:31.617 real 0m0.026s 00:05:31.617 user 0m0.010s 00:05:31.617 sys 0m0.016s 00:05:31.617 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.617 21:27:22 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:31.617 ************************************ 00:05:31.617 END TEST accel_negative_buffers 00:05:31.617 ************************************ 00:05:31.874 Error: writing output failed: Broken pipe 00:05:31.874 21:27:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.874 21:27:22 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:31.874 21:27:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:31.874 21:27:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.874 21:27:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.874 ************************************ 00:05:31.874 START TEST accel_crc32c 00:05:31.874 ************************************ 00:05:31.874 21:27:22 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:31.874 21:27:22 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:31.874 [2024-07-15 21:27:22.467499] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:31.874 [2024-07-15 21:27:22.467574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263579 ] 00:05:31.874 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.874 [2024-07-15 21:27:22.529264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.874 [2024-07-15 21:27:22.647725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 
21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:32.132 21:27:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.063 21:27:23 accel.accel_crc32c -- 
accel/accel.sh@20 -- # val= 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:33.063 21:27:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.063 00:05:33.063 real 0m1.397s 00:05:33.063 user 0m1.278s 00:05:33.063 sys 0m0.127s 00:05:33.063 21:27:23 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.063 21:27:23 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:33.063 ************************************ 00:05:33.063 END TEST accel_crc32c 00:05:33.063 ************************************ 00:05:33.321 21:27:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.321 21:27:23 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:33.321 21:27:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:33.321 21:27:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.321 21:27:23 accel -- common/autotest_common.sh@10 -- # set +x 
00:05:33.321 ************************************ 00:05:33.321 START TEST accel_crc32c_C2 00:05:33.321 ************************************ 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.321 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.322 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.322 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.322 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.322 21:27:23 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:33.322 [2024-07-15 21:27:23.922038] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:33.322 [2024-07-15 21:27:23.922106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263717 ] 00:05:33.322 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.322 [2024-07-15 21:27:23.972340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.322 [2024-07-15 21:27:24.074217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.579 21:27:24 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.579 21:27:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.512 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.512 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.512 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.512 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.512 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.512 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.512 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.512 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.513 
21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.513 00:05:34.513 real 0m1.361s 00:05:34.513 user 0m1.253s 00:05:34.513 sys 0m0.114s 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.513 21:27:25 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:34.513 ************************************ 00:05:34.513 END TEST accel_crc32c_C2 00:05:34.513 ************************************ 00:05:34.513 21:27:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.513 21:27:25 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:34.513 21:27:25 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.513 21:27:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.513 21:27:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.771 ************************************ 00:05:34.771 START TEST accel_copy 00:05:34.771 ************************************ 00:05:34.771 21:27:25 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:34.771 [2024-07-15 21:27:25.339384] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:34.771 [2024-07-15 21:27:25.339453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263843 ] 00:05:34.771 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.771 [2024-07-15 21:27:25.390418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.771 [2024-07-15 21:27:25.491899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:34.771 21:27:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.145 21:27:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.146 21:27:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.146 21:27:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:36.146 21:27:26 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.146 00:05:36.146 real 0m1.356s 00:05:36.146 user 0m1.252s 00:05:36.146 sys 0m0.109s 00:05:36.146 21:27:26 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.146 21:27:26 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:36.146 ************************************ 00:05:36.146 END TEST accel_copy 00:05:36.146 ************************************ 00:05:36.146 21:27:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.146 21:27:26 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.146 21:27:26 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:36.146 21:27:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.146 21:27:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.146 ************************************ 00:05:36.146 START TEST accel_fill 00:05:36.146 ************************************ 00:05:36.146 21:27:26 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:36.146 21:27:26 
accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:36.146 21:27:26 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:36.146 [2024-07-15 21:27:26.749548] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:36.146 [2024-07-15 21:27:26.749609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid263964 ]
00:05:36.146 EAL: No free 2048 kB hugepages reported on node 1
00:05:36.146 [2024-07-15 21:27:26.798297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:36.146 [2024-07-15 21:27:26.897824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:05:36.404 21:27:26 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:05:37.337 21:27:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:37.337 21:27:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:05:37.337 21:27:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:37.337
00:05:37.337 real	0m1.356s
00:05:37.337 user	0m1.240s
00:05:37.337 sys	0m0.122s
00:05:37.337 21:27:28 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:37.337 21:27:28 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:05:37.337 ************************************
00:05:37.337 END TEST accel_fill
00:05:37.337 ************************************
00:05:37.337 21:27:28 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:37.337 21:27:28 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:05:37.337 21:27:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:37.337 21:27:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:37.337 21:27:28 accel -- common/autotest_common.sh@10 -- # set +x
00:05:37.596 ************************************
00:05:37.596 START TEST accel_copy_crc32c
00:05:37.596 ************************************
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:05:37.596 [2024-07-15 21:27:28.157247] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:05:37.596 [2024-07-15 21:27:28.157312] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264176 ]
00:05:37.596 EAL: No free 2048 kB hugepages reported on node 1
00:05:37.596 [2024-07-15 21:27:28.207309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.596 [2024-07-15 21:27:28.305264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:37.596 21:27:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:38.971 21:27:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:38.971 21:27:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:38.971 21:27:29 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:38.971
00:05:38.971 real	0m1.353s
00:05:38.971 user	0m1.245s
00:05:38.971 sys	0m0.116s
00:05:38.971 21:27:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:38.971 21:27:29 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:38.971 ************************************
00:05:38.971 END TEST accel_copy_crc32c
00:05:38.971 ************************************
00:05:38.971 21:27:29 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:38.971 21:27:29 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:38.971 21:27:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:38.971 21:27:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:38.971 21:27:29 accel -- common/autotest_common.sh@10 -- # set +x
00:05:38.971 ************************************
00:05:38.971 START TEST accel_copy_crc32c_C2
00:05:38.971 ************************************
00:05:38.971 21:27:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:05:38.971 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:38.971 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:38.971 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:38.971 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:05:38.971 [2024-07-15 21:27:29.563457] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:05:38.971 [2024-07-15 21:27:29.563531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264302 ]
00:05:38.971 EAL: No free 2048 kB hugepages reported on node 1
00:05:38.971 [2024-07-15 21:27:29.617632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.971 [2024-07-15 21:27:29.719111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:39.230 21:27:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:40.166 21:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:40.166 21:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:40.166 21:27:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:40.166
00:05:40.166 real	0m1.365s
00:05:40.166 user	0m1.255s
00:05:40.166 sys	0m0.117s
00:05:40.166 21:27:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:40.166 21:27:30 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:40.166 ************************************
00:05:40.166 END TEST accel_copy_crc32c_C2
00:05:40.166 ************************************
00:05:40.166 21:27:30 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:40.166 21:27:30 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:05:40.166 21:27:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:40.166 21:27:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:40.166 21:27:30 accel -- common/autotest_common.sh@10 -- # set +x
00:05:40.425 ************************************
00:05:40.425 START TEST accel_dualcast
00:05:40.425 ************************************
00:05:40.425 21:27:30 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:05:40.425 21:27:30 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:05:40.425 21:27:30 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:05:40.425 21:27:30 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:05:40.425 21:27:30 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:05:40.425 [2024-07-15 21:27:30.986687] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:05:40.425 [2024-07-15 21:27:30.986759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264437 ]
00:05:40.425 EAL: No free 2048 kB hugepages reported on node 1
00:05:40.425 [2024-07-15 21:27:31.042669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.425 [2024-07-15 21:27:31.145168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:05:40.425 21:27:31 accel.accel_dualcast
-- accel/accel.sh@19 -- # IFS=: 00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:40.425 21:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:40.426 21:27:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:40.426 21:27:31 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.798 21:27:32 accel.accel_dualcast -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:41.798 21:27:32 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.798 00:05:41.798 real 0m1.366s 00:05:41.798 user 0m1.255s 00:05:41.798 sys 0m0.117s 00:05:41.798 21:27:32 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.798 21:27:32 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:41.798 ************************************ 00:05:41.798 END TEST accel_dualcast 00:05:41.798 ************************************ 00:05:41.798 21:27:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.798 21:27:32 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:41.798 21:27:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:41.798 21:27:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.798 21:27:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.798 ************************************ 00:05:41.798 START TEST accel_compare 00:05:41.798 ************************************ 00:05:41.798 21:27:32 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@12 -- # 
build_accel_config 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:41.798 21:27:32 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:41.798 [2024-07-15 21:27:32.404451] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:41.798 [2024-07-15 21:27:32.404520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264588 ] 00:05:41.798 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.798 [2024-07-15 21:27:32.461304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.798 [2024-07-15 21:27:32.559551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:42.056 
21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:42.056 
21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.056 
21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.056 21:27:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.987 21:27:33 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:42.987 21:27:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.987 00:05:42.987 real 0m1.354s 00:05:42.987 user 0m1.243s 00:05:42.987 sys 0m0.113s 00:05:42.987 21:27:33 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.987 21:27:33 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:42.987 ************************************ 00:05:42.987 END TEST accel_compare 00:05:42.987 ************************************ 00:05:42.987 21:27:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.987 21:27:33 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:42.987 21:27:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:42.987 21:27:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.988 21:27:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.245 ************************************ 00:05:43.245 START TEST accel_xor 00:05:43.245 ************************************ 00:05:43.245 21:27:33 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:43.245 21:27:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:43.245 [2024-07-15 21:27:33.813241] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:43.245 [2024-07-15 21:27:33.813316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264771 ] 00:05:43.245 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.245 [2024-07-15 21:27:33.867893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.245 [2024-07-15 21:27:33.969163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 
21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.245 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:43.246 21:27:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.617 00:05:44.617 real 0m1.362s 00:05:44.617 user 0m1.249s 00:05:44.617 sys 
0m0.116s 00:05:44.617 21:27:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.617 21:27:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:44.617 ************************************ 00:05:44.617 END TEST accel_xor 00:05:44.617 ************************************ 00:05:44.617 21:27:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.617 21:27:35 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:44.617 21:27:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:44.617 21:27:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.617 21:27:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.617 ************************************ 00:05:44.617 START TEST accel_xor 00:05:44.617 ************************************ 00:05:44.617 21:27:35 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.617 21:27:35 accel.accel_xor -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:44.617 21:27:35 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:44.617 [2024-07-15 21:27:35.230048] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:44.617 [2024-07-15 21:27:35.230119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264898 ] 00:05:44.617 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.617 [2024-07-15 21:27:35.280321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.617 [2024-07-15 21:27:35.378413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.875 21:27:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.807 21:27:36 accel.accel_xor -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:45.807 21:27:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.807 00:05:45.807 real 0m1.350s 00:05:45.807 user 0m1.238s 00:05:45.807 sys 0m0.114s 00:05:45.807 21:27:36 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.807 21:27:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:45.807 ************************************ 00:05:45.807 END TEST accel_xor 00:05:45.807 ************************************ 00:05:45.807 21:27:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.807 21:27:36 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:45.807 21:27:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:45.807 21:27:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.807 21:27:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.064 ************************************ 00:05:46.064 START TEST accel_dif_verify 00:05:46.064 ************************************ 00:05:46.064 21:27:36 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@12 -- # 
build_accel_config 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:46.064 [2024-07-15 21:27:36.636101] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:46.064 [2024-07-15 21:27:36.636175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265022 ] 00:05:46.064 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.064 [2024-07-15 21:27:36.686485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.064 [2024-07-15 21:27:36.787157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.064 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val=0x1 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:46.065 21:27:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.439 21:27:37 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # IFS=: 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:47.439 21:27:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ 
software == \s\o\f\t\w\a\r\e ]] 00:05:47.439 00:05:47.439 real 0m1.354s 00:05:47.439 user 0m1.244s 00:05:47.439 sys 0m0.113s 00:05:47.439 21:27:37 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.439 21:27:37 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:47.439 ************************************ 00:05:47.439 END TEST accel_dif_verify 00:05:47.439 ************************************ 00:05:47.439 21:27:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.439 21:27:37 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:47.439 21:27:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:47.439 21:27:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.439 21:27:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.439 ************************************ 00:05:47.439 START TEST accel_dif_generate 00:05:47.439 ************************************ 00:05:47.439 21:27:38 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.439 21:27:38 
accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:47.439 21:27:38 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:47.439 [2024-07-15 21:27:38.041230] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:47.439 [2024-07-15 21:27:38.041302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265226 ] 00:05:47.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.439 [2024-07-15 21:27:38.095610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.439 [2024-07-15 21:27:38.190245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 
-- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:47.697 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.698 21:27:38 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:47.698 21:27:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:48.630 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:48.631 21:27:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:48.631 21:27:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.631 21:27:39 accel.accel_dif_generate -- accel/accel.sh@27 -- 
# [[ -n dif_generate ]] 00:05:48.631 21:27:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.631 00:05:48.631 real 0m1.352s 00:05:48.631 user 0m1.240s 00:05:48.631 sys 0m0.116s 00:05:48.631 21:27:39 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.631 21:27:39 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:48.631 ************************************ 00:05:48.631 END TEST accel_dif_generate 00:05:48.631 ************************************ 00:05:48.631 21:27:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.631 21:27:39 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:48.631 21:27:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:48.631 21:27:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.631 21:27:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.890 ************************************ 00:05:48.890 START TEST accel_dif_generate_copy 00:05:48.890 ************************************ 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:48.890 21:27:39 
accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:48.890 [2024-07-15 21:27:39.450337] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:48.890 [2024-07-15 21:27:39.450406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265356 ] 00:05:48.890 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.890 [2024-07-15 21:27:39.501376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.890 [2024-07-15 21:27:39.602484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=1 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.890 21:27:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.259 21:27:40 
accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.259 00:05:50.259 real 0m1.365s 00:05:50.259 user 0m1.257s 00:05:50.259 sys 0m0.111s 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.259 21:27:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:50.259 ************************************ 00:05:50.259 END TEST accel_dif_generate_copy 00:05:50.259 ************************************ 00:05:50.259 21:27:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.259 21:27:40 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:50.259 21:27:40 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:50.259 21:27:40 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:50.259 21:27:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.259 21:27:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.259 ************************************ 00:05:50.259 START TEST accel_comp 00:05:50.259 ************************************ 00:05:50.259 21:27:40 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.259 21:27:40 
accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:50.259 21:27:40 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:50.259 [2024-07-15 21:27:40.874036] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:50.259 [2024-07-15 21:27:40.874102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265477 ] 00:05:50.259 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.259 [2024-07-15 21:27:40.924361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.260 [2024-07-15 21:27:41.025489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.516 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:50.517 21:27:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.446 21:27:42 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:51.446 21:27:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.446 00:05:51.446 real 0m1.365s 00:05:51.446 user 0m1.250s 00:05:51.446 sys 0m0.119s 00:05:51.446 21:27:42 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.446 21:27:42 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:51.446 ************************************ 00:05:51.446 END TEST accel_comp 00:05:51.446 ************************************ 00:05:51.703 21:27:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.703 21:27:42 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.703 21:27:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:51.703 21:27:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.703 21:27:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.703 ************************************ 00:05:51.703 START TEST accel_decomp 00:05:51.703 ************************************ 00:05:51.703 21:27:42 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:51.703 21:27:42 accel.accel_decomp -- accel/accel.sh@41 -- 
# jq -r . 00:05:51.703 [2024-07-15 21:27:42.294787] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:51.703 [2024-07-15 21:27:42.294859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265602 ] 00:05:51.703 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.703 [2024-07-15 21:27:42.350296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.703 [2024-07-15 21:27:42.455089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.959 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- 
accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 
21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:51.960 21:27:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:52.891 21:27:43 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.891 00:05:52.891 real 0m1.375s 00:05:52.891 user 0m1.257s 00:05:52.891 sys 0m0.121s 00:05:52.891 21:27:43 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.891 21:27:43 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:52.891 ************************************ 00:05:52.891 END TEST accel_decomp 00:05:52.891 ************************************ 00:05:52.891 21:27:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.891 21:27:43 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.891 21:27:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:52.891 21:27:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.891 21:27:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.150 ************************************ 00:05:53.150 START TEST accel_decomp_full 00:05:53.150 ************************************ 00:05:53.150 21:27:43 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:53.150 
21:27:43 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:53.150 [2024-07-15 21:27:43.723203] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:05:53.150 [2024-07-15 21:27:43.723277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265815 ]
00:05:53.150 EAL: No free 2048 kB hugepages reported on node 1
00:05:53.150 [2024-07-15 21:27:43.777101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.150 [2024-07-15 21:27:43.879320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:53.150 21:27:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=:
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:54.524 21:27:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:54.524
00:05:54.524 real 0m1.373s
00:05:54.524 user 0m1.255s
00:05:54.524 sys 0m0.121s
00:05:54.524 21:27:45 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:54.524 21:27:45 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:05:54.524 ************************************
00:05:54.524 END TEST accel_decomp_full
00:05:54.524 ************************************
00:05:54.524 21:27:45 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:54.524 21:27:45 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:54.524 21:27:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:54.524 21:27:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:54.524 21:27:45 accel -- common/autotest_common.sh@10 -- # set +x
00:05:54.524 ************************************
00:05:54.524 START TEST accel_decomp_mcore
00:05:54.524 ************************************
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:05:54.524 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
00:05:54.524 [2024-07-15 21:27:45.151903] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:05:54.524 [2024-07-15 21:27:45.151974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid265935 ]
00:05:54.524 EAL: No free 2048 kB hugepages reported on node 1
00:05:54.524 [2024-07-15 21:27:45.202852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:54.524 [2024-07-15 21:27:45.308443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:54.525 [2024-07-15 21:27:45.308561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:54.525 [2024-07-15 21:27:45.308641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:54.525 [2024-07-15 21:27:45.308652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:54.783 21:27:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.716 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:55.716 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:55.716 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.717 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:55.975
00:05:55.975 real 0m1.382s
00:05:55.975 user 0m4.564s
00:05:55.975 sys 0m0.126s
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:55.975 21:27:46 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:55.975 ************************************
00:05:55.975 END TEST accel_decomp_mcore
00:05:55.975 ************************************
00:05:55.975 21:27:46 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:55.975 21:27:46 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:55.975 21:27:46 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:05:55.975 21:27:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:55.975 21:27:46 accel -- common/autotest_common.sh@10 -- # set +x
00:05:55.975 ************************************
00:05:55.975 START TEST accel_decomp_full_mcore
00:05:55.975 ************************************
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:05:55.975 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:05:55.975 [2024-07-15 21:27:46.583462] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:05:55.975 [2024-07-15 21:27:46.583529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266065 ]
00:05:55.975 EAL: No free 2048 kB hugepages reported on node 1
00:05:55.975 [2024-07-15 21:27:46.632133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:55.975 [2024-07-15 21:27:46.736253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:55.975 [2024-07-15 21:27:46.736409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:55.975 [2024-07-15 21:27:46.736516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:55.975 [2024-07-15 21:27:46.736524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:56.233 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:56.234 21:27:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:57.168
00:05:57.168 real 0m1.381s
00:05:57.168 user 0m4.600s
00:05:57.168 sys 0m0.123s
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:57.168 21:27:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:57.168 ************************************
00:05:57.168 END TEST accel_decomp_full_mcore
00:05:57.168 ************************************
00:05:57.426 21:27:47 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:57.426 21:27:47 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:57.426 21:27:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:57.426 21:27:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:57.426 21:27:47 accel -- common/autotest_common.sh@10 -- # set +x
00:05:57.426 ************************************
00:05:57.426 START TEST accel_decomp_mthread
00:05:57.426 ************************************
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:05:57.426 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:05:57.426 [2024-07-15 21:27:48.022328] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:57.426 [2024-07-15 21:27:48.022400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266270 ] 00:05:57.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.427 [2024-07-15 21:27:48.073409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.427 [2024-07-15 21:27:48.171609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.427 21:27:48 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.427 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:57.685 
21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:57.685 21:27:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.620 00:05:58.620 real 0m1.356s 00:05:58.620 user 0m1.256s 00:05:58.620 sys 0m0.103s 00:05:58.620 21:27:49 
accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.620 21:27:49 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:58.620 ************************************ 00:05:58.620 END TEST accel_decomp_mthread 00:05:58.620 ************************************ 00:05:58.620 21:27:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.620 21:27:49 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:58.620 21:27:49 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:58.620 21:27:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.620 21:27:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.620 ************************************ 00:05:58.620 START TEST accel_decomp_full_mthread 00:05:58.620 ************************************ 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:58.620 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:58.879 [2024-07-15 21:27:49.416880] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:05:58.879 [2024-07-15 21:27:49.416958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266405 ] 00:05:58.879 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.879 [2024-07-15 21:27:49.470851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.879 [2024-07-15 21:27:49.572099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.879 21:27:49 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 
00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read 
-r var val 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.879 21:27:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.254 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.255 00:06:00.255 real 0m1.396s 00:06:00.255 user 0m1.283s 00:06:00.255 sys 0m0.116s 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.255 21:27:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 ************************************ 00:06:00.255 END TEST accel_decomp_full_mthread 00:06:00.255 ************************************ 00:06:00.255 21:27:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.255 21:27:50 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:00.255 21:27:50 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:00.255 
21:27:50 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:00.255 21:27:50 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:00.255 21:27:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.255 21:27:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.255 21:27:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.255 21:27:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.255 21:27:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.255 21:27:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.255 21:27:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.255 21:27:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:00.255 21:27:50 accel -- accel/accel.sh@41 -- # jq -r . 00:06:00.255 ************************************ 00:06:00.255 START TEST accel_dif_functional_tests 00:06:00.255 ************************************ 00:06:00.255 21:27:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:00.255 [2024-07-15 21:27:50.895937] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:06:00.255 [2024-07-15 21:27:50.896035] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266528 ] 00:06:00.255 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.255 [2024-07-15 21:27:50.950011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.514 [2024-07-15 21:27:51.054050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.514 [2024-07-15 21:27:51.054130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.514 [2024-07-15 21:27:51.054155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.514 00:06:00.514 00:06:00.514 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.514 http://cunit.sourceforge.net/ 00:06:00.514 00:06:00.514 00:06:00.514 Suite: accel_dif 00:06:00.514 Test: verify: DIF generated, GUARD check ...passed 00:06:00.514 Test: verify: DIF generated, APPTAG check ...passed 00:06:00.514 Test: verify: DIF generated, REFTAG check ...passed 00:06:00.514 Test: verify: DIF not generated, GUARD check ...[2024-07-15 21:27:51.133761] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:00.514 passed 00:06:00.514 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 21:27:51.133824] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:00.514 passed 00:06:00.514 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 21:27:51.133857] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:00.514 passed 00:06:00.514 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:00.514 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 21:27:51.133929] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=30, Expected=28, Actual=14 00:06:00.514 passed 00:06:00.514 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:00.514 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:00.514 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:00.514 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 21:27:51.134062] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:00.514 passed 00:06:00.514 Test: verify copy: DIF generated, GUARD check ...passed 00:06:00.514 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:00.514 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:00.514 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 21:27:51.134224] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:00.514 passed 00:06:00.514 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 21:27:51.134259] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:00.514 passed 00:06:00.514 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 21:27:51.134291] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:00.514 passed 00:06:00.514 Test: generate copy: DIF generated, GUARD check ...passed 00:06:00.514 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:00.514 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:00.514 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:00.514 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:00.514 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:00.514 Test: generate copy: iovecs-len validate ...[2024-07-15 21:27:51.134520] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:00.514 passed 00:06:00.514 Test: generate copy: buffer alignment validate ...passed 00:06:00.514 00:06:00.514 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.514 suites 1 1 n/a 0 0 00:06:00.514 tests 26 26 26 0 0 00:06:00.514 asserts 115 115 115 0 n/a 00:06:00.514 00:06:00.514 Elapsed time = 0.003 seconds 00:06:00.514 00:06:00.514 real 0m0.455s 00:06:00.514 user 0m0.633s 00:06:00.514 sys 0m0.144s 00:06:00.514 21:27:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.773 21:27:51 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 ************************************ 00:06:00.773 END TEST accel_dif_functional_tests 00:06:00.773 ************************************ 00:06:00.773 21:27:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.773 00:06:00.773 real 0m30.993s 00:06:00.773 user 0m34.591s 00:06:00.773 sys 0m4.057s 00:06:00.773 21:27:51 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.773 21:27:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 ************************************ 00:06:00.773 END TEST accel 00:06:00.773 ************************************ 00:06:00.773 21:27:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.773 21:27:51 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:00.773 21:27:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.773 21:27:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.773 21:27:51 -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 ************************************ 00:06:00.773 START TEST accel_rpc 00:06:00.773 ************************************ 00:06:00.773 21:27:51 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:00.773 * Looking for test storage... 
00:06:00.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:00.773 21:27:51 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:00.773 21:27:51 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=266678 00:06:00.773 21:27:51 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 266678 00:06:00.773 21:27:51 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:00.773 21:27:51 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 266678 ']' 00:06:00.773 21:27:51 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.773 21:27:51 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.773 21:27:51 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.773 21:27:51 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.773 21:27:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.773 [2024-07-15 21:27:51.484412] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:06:00.773 [2024-07-15 21:27:51.484498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266678 ] 00:06:00.773 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.773 [2024-07-15 21:27:51.533417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.032 [2024-07-15 21:27:51.631437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.032 21:27:51 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.032 21:27:51 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.032 21:27:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:01.032 21:27:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:01.032 21:27:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:01.032 21:27:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:01.032 21:27:51 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:01.032 21:27:51 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.032 21:27:51 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.032 21:27:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 ************************************ 00:06:01.032 START TEST accel_assign_opcode 00:06:01.032 ************************************ 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set 
+x 00:06:01.032 [2024-07-15 21:27:51.716030] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 [2024-07-15 21:27:51.724031] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.032 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.304 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.304 21:27:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:01.304 21:27:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:01.304 21:27:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:01.304 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.304 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.304 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.304 software 00:06:01.304 00:06:01.304 real 0m0.253s 00:06:01.304 user 0m0.040s 00:06:01.304 sys 0m0.006s 00:06:01.304 21:27:51 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.304 21:27:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:01.304 ************************************ 00:06:01.304 END TEST accel_assign_opcode 00:06:01.304 ************************************ 00:06:01.304 21:27:51 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.304 21:27:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 266678 00:06:01.304 21:27:51 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 266678 ']' 00:06:01.304 21:27:51 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 266678 00:06:01.304 21:27:51 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:01.304 21:27:51 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.304 21:27:51 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 266678 00:06:01.304 21:27:52 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.304 21:27:52 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.304 21:27:52 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 266678' 00:06:01.304 killing process with pid 266678 00:06:01.304 21:27:52 accel_rpc -- common/autotest_common.sh@967 -- # kill 266678 00:06:01.304 21:27:52 accel_rpc -- common/autotest_common.sh@972 -- # wait 266678 00:06:01.598 00:06:01.598 real 0m0.908s 00:06:01.598 user 0m0.893s 00:06:01.598 sys 0m0.362s 00:06:01.598 21:27:52 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.598 21:27:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.598 ************************************ 00:06:01.598 END TEST accel_rpc 00:06:01.598 ************************************ 00:06:01.598 21:27:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.598 21:27:52 -- spdk/autotest.sh@185 -- # run_test app_cmdline 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:01.598 21:27:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.598 21:27:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.598 21:27:52 -- common/autotest_common.sh@10 -- # set +x 00:06:01.598 ************************************ 00:06:01.598 START TEST app_cmdline 00:06:01.598 ************************************ 00:06:01.598 21:27:52 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:01.894 * Looking for test storage... 00:06:01.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:01.894 21:27:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:01.894 21:27:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=266854 00:06:01.894 21:27:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:01.894 21:27:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 266854 00:06:01.894 21:27:52 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 266854 ']' 00:06:01.894 21:27:52 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.894 21:27:52 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.894 21:27:52 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.894 21:27:52 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.894 21:27:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.894 [2024-07-15 21:27:52.455306] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:06:01.894 [2024-07-15 21:27:52.455413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid266854 ] 00:06:01.894 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.894 [2024-07-15 21:27:52.514591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.894 [2024-07-15 21:27:52.631270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.174 21:27:52 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.174 21:27:52 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:02.174 21:27:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:02.449 { 00:06:02.449 "version": "SPDK v24.09-pre git sha1 996bd8752", 00:06:02.449 "fields": { 00:06:02.449 "major": 24, 00:06:02.449 "minor": 9, 00:06:02.449 "patch": 0, 00:06:02.449 "suffix": "-pre", 00:06:02.449 "commit": "996bd8752" 00:06:02.449 } 00:06:02.449 } 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:02.449 21:27:53 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.449 21:27:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:02.449 21:27:53 app_cmdline -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:02.449 21:27:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.449 21:27:53 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:02.450 21:27:53 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.707 request: 00:06:02.707 { 00:06:02.707 "method": "env_dpdk_get_mem_stats", 00:06:02.707 "req_id": 1 
00:06:02.707 } 00:06:02.707 Got JSON-RPC error response 00:06:02.707 response: 00:06:02.707 { 00:06:02.707 "code": -32601, 00:06:02.707 "message": "Method not found" 00:06:02.707 } 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.707 21:27:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 266854 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 266854 ']' 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 266854 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.707 21:27:53 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 266854 00:06:02.965 21:27:53 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.965 21:27:53 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.965 21:27:53 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 266854' 00:06:02.965 killing process with pid 266854 00:06:02.965 21:27:53 app_cmdline -- common/autotest_common.sh@967 -- # kill 266854 00:06:02.965 21:27:53 app_cmdline -- common/autotest_common.sh@972 -- # wait 266854 00:06:03.224 00:06:03.224 real 0m1.497s 00:06:03.224 user 0m1.991s 00:06:03.224 sys 0m0.430s 00:06:03.224 21:27:53 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.224 21:27:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.224 ************************************ 00:06:03.224 END TEST app_cmdline 00:06:03.224 ************************************ 00:06:03.224 21:27:53 -- 
common/autotest_common.sh@1142 -- # return 0 00:06:03.224 21:27:53 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:03.224 21:27:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.224 21:27:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.224 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:06:03.224 ************************************ 00:06:03.224 START TEST version 00:06:03.224 ************************************ 00:06:03.224 21:27:53 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:03.224 * Looking for test storage... 00:06:03.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:03.224 21:27:53 version -- app/version.sh@17 -- # get_header_version major 00:06:03.224 21:27:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:03.224 21:27:53 version -- app/version.sh@14 -- # cut -f2 00:06:03.224 21:27:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.224 21:27:53 version -- app/version.sh@17 -- # major=24 00:06:03.224 21:27:53 version -- app/version.sh@18 -- # get_header_version minor 00:06:03.224 21:27:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:03.224 21:27:53 version -- app/version.sh@14 -- # cut -f2 00:06:03.224 21:27:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.224 21:27:53 version -- app/version.sh@18 -- # minor=9 00:06:03.224 21:27:53 version -- app/version.sh@19 -- # get_header_version patch 00:06:03.224 21:27:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:03.224 
21:27:53 version -- app/version.sh@14 -- # cut -f2 00:06:03.224 21:27:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.224 21:27:53 version -- app/version.sh@19 -- # patch=0 00:06:03.224 21:27:53 version -- app/version.sh@20 -- # get_header_version suffix 00:06:03.224 21:27:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:03.224 21:27:53 version -- app/version.sh@14 -- # cut -f2 00:06:03.224 21:27:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.224 21:27:53 version -- app/version.sh@20 -- # suffix=-pre 00:06:03.224 21:27:53 version -- app/version.sh@22 -- # version=24.9 00:06:03.224 21:27:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:03.224 21:27:53 version -- app/version.sh@28 -- # version=24.9rc0 00:06:03.224 21:27:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:03.224 21:27:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:03.224 21:27:53 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:03.224 21:27:53 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:03.224 00:06:03.224 real 0m0.111s 00:06:03.224 user 0m0.056s 00:06:03.224 sys 0m0.075s 00:06:03.224 21:27:54 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.224 21:27:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:03.224 ************************************ 00:06:03.224 END TEST version 00:06:03.224 ************************************ 00:06:03.484 21:27:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.484 21:27:54 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:03.484 21:27:54 -- spdk/autotest.sh@198 -- # uname -s 00:06:03.484 21:27:54 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:03.484 21:27:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:03.484 21:27:54 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:03.484 21:27:54 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:03.484 21:27:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:03.484 21:27:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:03.484 21:27:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.484 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.484 21:27:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:03.484 21:27:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:03.484 21:27:54 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:03.484 21:27:54 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:03.484 21:27:54 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:03.484 21:27:54 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:03.484 21:27:54 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:03.484 21:27:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:03.484 21:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.484 21:27:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.484 ************************************ 00:06:03.484 START TEST nvmf_tcp 00:06:03.484 ************************************ 00:06:03.484 21:27:54 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:03.484 * Looking for test storage... 00:06:03.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.484 21:27:54 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.484 21:27:54 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.484 21:27:54 nvmf_tcp -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.484 21:27:54 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.484 21:27:54 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.484 21:27:54 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.484 21:27:54 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:03.484 21:27:54 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.484 21:27:54 nvmf_tcp -- 
nvmf/common.sh@47 -- # : 0 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:03.484 21:27:54 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.484 21:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:03.484 21:27:54 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:03.484 21:27:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:03.484 21:27:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.484 21:27:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.484 ************************************ 00:06:03.484 START TEST nvmf_example 00:06:03.484 ************************************ 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:03.484 * Looking for test storage... 
00:06:03.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.484 21:27:54 
nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.484 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:03.485 21:27:54 
nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.485 
21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:03.485 21:27:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.392 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:05.393 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:05.393 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:05.393 Found net devices under 0000:08:00.0: cvl_0_0 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:05.393 Found net devices under 0000:08:00.1: cvl_0_1 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:05.393 21:27:55 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:05.393 21:27:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:05.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:05.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:06:05.393 00:06:05.393 --- 10.0.0.2 ping statistics --- 00:06:05.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.393 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:05.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:05.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:06:05.393 00:06:05.393 --- 10.0.0.1 ping statistics --- 00:06:05.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.393 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.393 21:27:56 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=268356 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 268356 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 268356 ']' 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.393 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 21:27:56 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:05.652 21:27:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:05.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.894 Initializing NVMe Controllers 00:06:15.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:15.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:15.894 Initialization complete. Launching workers. 
00:06:15.894 ======================================================== 00:06:15.894 Latency(us) 00:06:15.894 Device Information : IOPS MiB/s Average min max 00:06:15.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15412.94 60.21 4153.39 614.70 15439.73 00:06:15.894 ======================================================== 00:06:15.894 Total : 15412.94 60.21 4153.39 614.70 15439.73 00:06:15.894 00:06:15.894 21:28:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:15.894 21:28:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:15.894 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:15.894 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:15.894 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:15.894 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:15.894 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:15.894 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:15.894 rmmod nvme_tcp 00:06:15.894 rmmod nvme_fabrics 00:06:16.154 rmmod nvme_keyring 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 268356 ']' 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 268356 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 268356 ']' 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 268356 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 268356 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 268356' 00:06:16.154 killing process with pid 268356 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 268356 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 268356 00:06:16.154 nvmf threads initialize successfully 00:06:16.154 bdev subsystem init successfully 00:06:16.154 created a nvmf target service 00:06:16.154 create targets's poll groups done 00:06:16.154 all subsystems of target started 00:06:16.154 nvmf target is running 00:06:16.154 all subsystems of target stopped 00:06:16.154 destroy targets's poll groups done 00:06:16.154 destroyed the nvmf target service 00:06:16.154 bdev subsystem finish successfully 00:06:16.154 nvmf threads destroy successfully 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:16.154 21:28:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.695 21:28:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:18.695 21:28:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:18.695 21:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.695 21:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:18.695 00:06:18.695 real 0m14.804s 00:06:18.695 user 0m41.920s 00:06:18.695 sys 0m2.908s 00:06:18.695 21:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.695 21:28:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:18.695 ************************************ 00:06:18.695 END TEST nvmf_example 00:06:18.695 ************************************ 00:06:18.695 21:28:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:18.695 21:28:09 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:18.695 21:28:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:18.695 21:28:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.695 21:28:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.695 ************************************ 00:06:18.695 START TEST nvmf_filesystem 00:06:18.696 ************************************ 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:18.696 * Looking for test storage... 
00:06:18.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:18.696 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:18.696 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:18.696 #define SPDK_CONFIG_H 00:06:18.696 
#define SPDK_CONFIG_APPS 1 00:06:18.696 #define SPDK_CONFIG_ARCH native 00:06:18.696 #undef SPDK_CONFIG_ASAN 00:06:18.696 #undef SPDK_CONFIG_AVAHI 00:06:18.696 #undef SPDK_CONFIG_CET 00:06:18.696 #define SPDK_CONFIG_COVERAGE 1 00:06:18.696 #define SPDK_CONFIG_CROSS_PREFIX 00:06:18.696 #undef SPDK_CONFIG_CRYPTO 00:06:18.696 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:18.696 #undef SPDK_CONFIG_CUSTOMOCF 00:06:18.696 #undef SPDK_CONFIG_DAOS 00:06:18.696 #define SPDK_CONFIG_DAOS_DIR 00:06:18.696 #define SPDK_CONFIG_DEBUG 1 00:06:18.696 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:18.696 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:18.696 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:18.696 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:18.696 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:18.696 #undef SPDK_CONFIG_DPDK_UADK 00:06:18.696 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:18.697 #define SPDK_CONFIG_EXAMPLES 1 00:06:18.697 #undef SPDK_CONFIG_FC 00:06:18.697 #define SPDK_CONFIG_FC_PATH 00:06:18.697 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:18.697 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:18.697 #undef SPDK_CONFIG_FUSE 00:06:18.697 #undef SPDK_CONFIG_FUZZER 00:06:18.697 #define SPDK_CONFIG_FUZZER_LIB 00:06:18.697 #undef SPDK_CONFIG_GOLANG 00:06:18.697 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:18.697 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:18.697 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:18.697 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:18.697 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:18.697 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:18.697 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:18.697 #define SPDK_CONFIG_IDXD 1 00:06:18.697 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:18.697 #undef SPDK_CONFIG_IPSEC_MB 00:06:18.697 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:18.697 #define SPDK_CONFIG_ISAL 1 00:06:18.697 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:18.697 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:06:18.697 #define SPDK_CONFIG_LIBDIR 00:06:18.697 #undef SPDK_CONFIG_LTO 00:06:18.697 #define SPDK_CONFIG_MAX_LCORES 128 00:06:18.697 #define SPDK_CONFIG_NVME_CUSE 1 00:06:18.697 #undef SPDK_CONFIG_OCF 00:06:18.697 #define SPDK_CONFIG_OCF_PATH 00:06:18.697 #define SPDK_CONFIG_OPENSSL_PATH 00:06:18.697 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:18.697 #define SPDK_CONFIG_PGO_DIR 00:06:18.697 #undef SPDK_CONFIG_PGO_USE 00:06:18.697 #define SPDK_CONFIG_PREFIX /usr/local 00:06:18.697 #undef SPDK_CONFIG_RAID5F 00:06:18.697 #undef SPDK_CONFIG_RBD 00:06:18.697 #define SPDK_CONFIG_RDMA 1 00:06:18.697 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:18.697 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:18.697 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:18.697 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:18.697 #define SPDK_CONFIG_SHARED 1 00:06:18.697 #undef SPDK_CONFIG_SMA 00:06:18.697 #define SPDK_CONFIG_TESTS 1 00:06:18.697 #undef SPDK_CONFIG_TSAN 00:06:18.697 #define SPDK_CONFIG_UBLK 1 00:06:18.697 #define SPDK_CONFIG_UBSAN 1 00:06:18.697 #undef SPDK_CONFIG_UNIT_TESTS 00:06:18.697 #undef SPDK_CONFIG_URING 00:06:18.697 #define SPDK_CONFIG_URING_PATH 00:06:18.697 #undef SPDK_CONFIG_URING_ZNS 00:06:18.697 #undef SPDK_CONFIG_USDT 00:06:18.697 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:18.697 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:18.697 #define SPDK_CONFIG_VFIO_USER 1 00:06:18.697 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:18.697 #define SPDK_CONFIG_VHOST 1 00:06:18.697 #define SPDK_CONFIG_VIRTIO 1 00:06:18.697 #undef SPDK_CONFIG_VTUNE 00:06:18.697 #define SPDK_CONFIG_VTUNE_DIR 00:06:18.697 #define SPDK_CONFIG_WERROR 1 00:06:18.697 #define SPDK_CONFIG_WPDK_DIR 00:06:18.697 #undef SPDK_CONFIG_XNVME 00:06:18.697 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:18.697 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:18.697 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:18.697 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:18.698 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:18.698 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:18.698 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:18.698 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:18.699 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j32 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:18.699 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 269666 ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 269666 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.IsDKPC 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 
00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.IsDKPC/tests/target /tmp/spdk.IsDKPC 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1957711872 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3326717952 00:06:18.699 21:28:09 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=44006895616 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=53546168320 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9539272704 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=26768371712 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=26773082112 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=10700734464 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=10709233664 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
uses["$mount"]=8499200 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=26772774912 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=26773086208 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=311296 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=5354610688 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5354614784 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:18.699 * Looking for test storage... 
00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=44006895616 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=11753865216 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:18.699 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:18.700 21:28:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.608 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.608 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:20.608 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:06:20.608 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:20.609 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:20.609 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:20.609 Found net devices under 0000:08:00.0: cvl_0_0 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:20.609 Found net devices under 0000:08:00.1: cvl_0_1 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.609 21:28:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:20.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:20.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:06:20.609 00:06:20.609 --- 10.0.0.2 ping statistics --- 00:06:20.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.609 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:20.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:06:20.609 00:06:20.609 --- 10.0.0.1 ping statistics --- 00:06:20.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.609 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.609 ************************************ 00:06:20.609 START TEST nvmf_filesystem_no_in_capsule 00:06:20.609 ************************************ 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=270856 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 270856 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 270856 ']' 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.609 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:20.610 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.610 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.610 [2024-07-15 21:28:11.134073] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:06:20.610 [2024-07-15 21:28:11.134180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.610 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.610 [2024-07-15 21:28:11.200828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.610 [2024-07-15 21:28:11.323592] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.610 [2024-07-15 21:28:11.323648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.610 [2024-07-15 21:28:11.323664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.610 [2024-07-15 21:28:11.323678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.610 [2024-07-15 21:28:11.323690] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
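The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which polls until the target's RPC socket exists before any rpc_cmd runs (the real helper, visible at common/autotest_common.sh@833-836, also sets max_retries=100 and performs additional liveness checks). A simplified version of that wait loop:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears, then return 0; return 1 on
# timeout. Simplified from the autotest waitforlisten helper.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0            # app is up and listening
        sleep 0.1
    done
    return 1
}
```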
00:06:20.610 [2024-07-15 21:28:11.323758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.610 [2024-07-15 21:28:11.323811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.610 [2024-07-15 21:28:11.323840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.610 [2024-07-15 21:28:11.323843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.868 [2024-07-15 21:28:11.474935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
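The reactors above belong to nvmf_tgt running inside the namespace built earlier in the trace (nvmf/common.sh@244-268, launched via `ip netns exec` at common.sh@480). That plumbing condenses to the sketch below; it requires root, and the interface names, namespace name, and 10.0.0.x addresses are the ones from this log, not fixed values:

```shell
#!/usr/bin/env bash
# Build the point-to-point TCP path used by the test: move the target
# interface into a private namespace, address both ends, open port 4420.
setup_tcp_netns() {
    local ns=$1 tgt_if=$2 ini_if=$3           # cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"         # target side lives in the netns
    ip addr add 10.0.0.1/24 dev "$ini_if"     # initiator keeps the host side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # sanity-check the path
}
# The target app is then started inside the namespace, e.g.:
#   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF
```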
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.868 Malloc1 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.868 21:28:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.868 [2024-07-15 21:28:11.633075] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:20.868 { 00:06:20.868 "name": "Malloc1", 00:06:20.868 "aliases": [ 00:06:20.868 "4694c5e7-af77-44f8-b258-40d3100d6d92" 00:06:20.868 ], 00:06:20.868 "product_name": "Malloc disk", 
00:06:20.868 "block_size": 512, 00:06:20.868 "num_blocks": 1048576, 00:06:20.868 "uuid": "4694c5e7-af77-44f8-b258-40d3100d6d92", 00:06:20.868 "assigned_rate_limits": { 00:06:20.868 "rw_ios_per_sec": 0, 00:06:20.868 "rw_mbytes_per_sec": 0, 00:06:20.868 "r_mbytes_per_sec": 0, 00:06:20.868 "w_mbytes_per_sec": 0 00:06:20.868 }, 00:06:20.868 "claimed": true, 00:06:20.868 "claim_type": "exclusive_write", 00:06:20.868 "zoned": false, 00:06:20.868 "supported_io_types": { 00:06:20.868 "read": true, 00:06:20.868 "write": true, 00:06:20.868 "unmap": true, 00:06:20.868 "flush": true, 00:06:20.868 "reset": true, 00:06:20.868 "nvme_admin": false, 00:06:20.868 "nvme_io": false, 00:06:20.868 "nvme_io_md": false, 00:06:20.868 "write_zeroes": true, 00:06:20.868 "zcopy": true, 00:06:20.868 "get_zone_info": false, 00:06:20.868 "zone_management": false, 00:06:20.868 "zone_append": false, 00:06:20.868 "compare": false, 00:06:20.868 "compare_and_write": false, 00:06:20.868 "abort": true, 00:06:20.868 "seek_hole": false, 00:06:20.868 "seek_data": false, 00:06:20.868 "copy": true, 00:06:20.868 "nvme_iov_md": false 00:06:20.868 }, 00:06:20.868 "memory_domains": [ 00:06:20.868 { 00:06:20.868 "dma_device_id": "system", 00:06:20.868 "dma_device_type": 1 00:06:20.868 }, 00:06:20.868 { 00:06:20.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.868 "dma_device_type": 2 00:06:20.868 } 00:06:20.868 ], 00:06:20.868 "driver_specific": {} 00:06:20.868 } 00:06:20.868 ]' 00:06:20.868 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:21.125 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:21.125 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:21.125 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:21.125 
21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:21.125 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:21.125 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:21.125 21:28:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:21.691 21:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:21.691 21:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:21.691 21:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:21.691 21:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:21.691 21:28:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:23.587 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:23.844 21:28:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:41.912 21:28:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:41.912 21:28:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:41.912 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:41.912 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.912 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.912 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.169 ************************************ 00:06:42.169 START TEST filesystem_ext4 00:06:42.169 ************************************ 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 
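make_filesystem, entered above, picks the mkfs force flag per filesystem type and runs `mkfs.$fstype`; each filesystem_* test then mounts the result and performs a touch/sync/rm/umount cycle (target/filesystem.sh@23-30). Both pieces, sketched as standalone functions with illustrative names — the flag values come straight from the trace (ext4 uses -F at common/autotest_common.sh@930, btrfs and xfs use -f at @932):

```shell
#!/usr/bin/env bash
# mkfs force-flag dispatch as at common/autotest_common.sh@929-932:
# ext4's mkfs takes -F to overwrite; btrfs and xfs take -f.
fs_force_flag() {
    case $1 in
        ext4)      echo "-F" ;;
        btrfs|xfs) echo "-f" ;;
        *)         return 1  ;;
    esac
}

# The per-filesystem smoke cycle (target/filesystem.sh@23-30): mount,
# create a file, sync, delete it, sync, unmount. Requires root.
fs_smoke_cycle() {
    local dev=$1 mnt=${2:-/mnt/device}        # e.g. /dev/nvme0n1p1
    mount "$dev" "$mnt" &&
        touch "$mnt/aaa" && sync &&
        rm "$mnt/aaa" && sync &&
        umount "$mnt"
}
```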
00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:42.169 21:28:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:42.169 mke2fs 1.46.5 (30-Dec-2021) 00:06:42.169 Discarding device blocks: 0/522240 done 00:06:42.169 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:42.169 Filesystem UUID: 4f6d70c1-41ab-4979-880a-b13746d4ffd3 00:06:42.169 Superblock backups stored on blocks: 00:06:42.169 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:42.169 00:06:42.169 Allocating group tables: 0/64 done 00:06:42.169 Writing inode tables: 0/64 done 00:06:45.449 Creating journal (8192 blocks): done 00:06:45.449 Writing superblocks and filesystem accounting information: 0/64 done 00:06:45.449 00:06:45.449 21:28:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:45.449 21:28:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 270856 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.449 00:06:45.449 real 0m3.512s 00:06:45.449 user 0m0.016s 00:06:45.449 sys 0m0.064s 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.449 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:45.449 ************************************ 00:06:45.449 END TEST filesystem_ext4 00:06:45.449 ************************************ 00:06:45.706 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:45.706 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:45.706 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:45.706 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.706 21:28:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.706 ************************************ 00:06:45.706 START TEST filesystem_btrfs 00:06:45.707 ************************************ 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:45.707 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:45.963 btrfs-progs v6.6.2 00:06:45.963 See https://btrfs.readthedocs.io for more 
information. 00:06:45.963 00:06:45.963 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:45.963 NOTE: several default settings have changed in version 5.15, please make sure 00:06:45.963 this does not affect your deployments: 00:06:45.963 - DUP for metadata (-m dup) 00:06:45.963 - enabled no-holes (-O no-holes) 00:06:45.963 - enabled free-space-tree (-R free-space-tree) 00:06:45.963 00:06:45.963 Label: (null) 00:06:45.963 UUID: c946a243-39bd-4241-9111-3603ddc8da3c 00:06:45.963 Node size: 16384 00:06:45.963 Sector size: 4096 00:06:45.963 Filesystem size: 510.00MiB 00:06:45.963 Block group profiles: 00:06:45.963 Data: single 8.00MiB 00:06:45.963 Metadata: DUP 32.00MiB 00:06:45.963 System: DUP 8.00MiB 00:06:45.963 SSD detected: yes 00:06:45.963 Zoned device: no 00:06:45.963 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:45.963 Runtime features: free-space-tree 00:06:45.963 Checksum: crc32c 00:06:45.963 Number of devices: 1 00:06:45.963 Devices: 00:06:45.963 ID SIZE PATH 00:06:45.963 1 510.00MiB /dev/nvme0n1p1 00:06:45.963 00:06:45.963 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:45.963 21:28:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:46.526 21:28:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 270856 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:46.526 00:06:46.526 real 0m0.943s 00:06:46.526 user 0m0.024s 00:06:46.526 sys 0m0.151s 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:46.526 ************************************ 00:06:46.526 END TEST filesystem_btrfs 00:06:46.526 ************************************ 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1105 -- # xtrace_disable
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:46.526 ************************************
00:06:46.526 START TEST filesystem_xfs
00:06:46.526 ************************************
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:06:46.526 21:28:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:06:46.783 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:06:46.783 = sectsz=512 attr=2, projid32bit=1
00:06:46.783 = crc=1 finobt=1, sparse=1, rmapbt=0
00:06:46.783 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:06:46.783 data = bsize=4096 blocks=130560, imaxpct=25
00:06:46.783 = sunit=0 swidth=0 blks
00:06:46.783 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:06:46.783 log =internal log bsize=4096 blocks=16384, version=2
00:06:46.783 = sectsz=512 sunit=0 blks, lazy-count=1
00:06:46.783 realtime =none extsz=4096 blocks=0, rtextents=0
00:06:47.711 Discarding blocks...Done.
00:06:47.711 21:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0
00:06:47.711 21:28:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 270856
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:50.236
00:06:50.236 real 0m3.311s
00:06:50.236 user 0m0.016s
00:06:50.236 sys 0m0.096s
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:06:50.236 ************************************
00:06:50.236 END TEST filesystem_xfs
00:06:50.236 ************************************
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:06:50.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 270856
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 270856 ']'
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 270856
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 270856
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 270856'
00:06:50.236 killing process with pid 270856
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 270856
00:06:50.236 21:28:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 270856
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:06:50.802
00:06:50.802 real 0m30.222s
00:06:50.802 user 1m57.056s
00:06:50.802 sys 0m3.574s
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:50.802 ************************************
00:06:50.802 END TEST nvmf_filesystem_no_in_capsule
00:06:50.802 ************************************
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:06:50.802 ************************************
00:06:50.802 START TEST nvmf_filesystem_in_capsule
00:06:50.802 ************************************
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=273930
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 273930
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 273930 ']'
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:50.802 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:50.802 [2024-07-15 21:28:41.404749] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:06:50.802 [2024-07-15 21:28:41.404848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:50.802 EAL: No free 2048 kB hugepages reported on node 1
00:06:50.802 [2024-07-15 21:28:41.464994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:50.802 [2024-07-15 21:28:41.568551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:50.802 [2024-07-15 21:28:41.568601] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:50.802 [2024-07-15 21:28:41.568627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:50.802 [2024-07-15 21:28:41.568638] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:50.802 [2024-07-15 21:28:41.568648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:50.802 [2024-07-15 21:28:41.568698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:50.802 [2024-07-15 21:28:41.568747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:50.802 [2024-07-15 21:28:41.568774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:50.802 [2024-07-15 21:28:41.568776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:51.060 [2024-07-15 21:28:41.709636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:51.060 Malloc1
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.060 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:51.318 [2024-07-15 21:28:41.860557] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:06:51.318 {
00:06:51.318 "name": "Malloc1",
00:06:51.318 "aliases": [
00:06:51.318 "86ad8ee6-698c-4731-9188-6a4108b349be"
00:06:51.318 ],
00:06:51.318 "product_name": "Malloc disk",
00:06:51.318 "block_size": 512,
00:06:51.318 "num_blocks": 1048576,
00:06:51.318 "uuid": "86ad8ee6-698c-4731-9188-6a4108b349be",
00:06:51.318 "assigned_rate_limits": {
00:06:51.318 "rw_ios_per_sec": 0,
00:06:51.318 "rw_mbytes_per_sec": 0,
00:06:51.318 "r_mbytes_per_sec": 0,
00:06:51.318 "w_mbytes_per_sec": 0
00:06:51.318 },
00:06:51.318 "claimed": true,
00:06:51.318 "claim_type": "exclusive_write",
00:06:51.318 "zoned": false,
00:06:51.318 "supported_io_types": {
00:06:51.318 "read": true,
00:06:51.318 "write": true,
00:06:51.318 "unmap": true,
00:06:51.318 "flush": true,
00:06:51.318 "reset": true,
00:06:51.318 "nvme_admin": false,
00:06:51.318 "nvme_io": false,
00:06:51.318 "nvme_io_md": false,
00:06:51.318 "write_zeroes": true,
00:06:51.318 "zcopy": true,
00:06:51.318 "get_zone_info": false,
00:06:51.318 "zone_management": false,
00:06:51.318 "zone_append": false,
00:06:51.318 "compare": false,
00:06:51.318 "compare_and_write": false,
00:06:51.318 "abort": true,
00:06:51.318 "seek_hole": false,
00:06:51.318 "seek_data": false,
00:06:51.318 "copy": true,
00:06:51.318 "nvme_iov_md": false
00:06:51.318 },
00:06:51.318 "memory_domains": [
00:06:51.318 {
00:06:51.318 "dma_device_id": "system",
00:06:51.318 "dma_device_type": 1
00:06:51.318 },
00:06:51.318 {
00:06:51.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:51.318 "dma_device_type": 2
00:06:51.318 }
00:06:51.318 ],
00:06:51.318 "driver_specific": {}
00:06:51.318 }
00:06:51.318 ]'
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:06:51.318 21:28:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:06:51.882 21:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:06:51.882 21:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0
00:06:51.883 21:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:06:51.883 21:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:06:51.883 21:28:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:06:53.778 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:06:54.035 21:28:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:07:06.222 21:28:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:07:07.152 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:07:07.152 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:07:07.152 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:07:07.152 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:07.153 ************************************
00:07:07.153 START TEST filesystem_in_capsule_ext4
00:07:07.153 ************************************
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']'
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:07:07.153 21:28:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:07:07.153 mke2fs 1.46.5 (30-Dec-2021)
00:07:07.410 Discarding device blocks: 0/522240 done
00:07:07.410 Creating filesystem with 522240 1k blocks and 130560 inodes
00:07:07.410 Filesystem UUID: 55377185-3e7d-46b5-8e74-a06444865201
00:07:07.410 Superblock backups stored on blocks:
00:07:07.410 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:07:07.410
00:07:07.410 Allocating group tables: 0/64 done
00:07:07.410 Writing inode tables: 0/64 done
00:07:07.410 Creating journal (8192 blocks): done
00:07:07.666 Writing superblocks and filesystem accounting information: 0/64 done
00:07:07.667
00:07:07.667 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0
00:07:07.667 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 273930
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:07.924
00:07:07.924 real 0m0.757s
00:07:07.924 user 0m0.012s
00:07:07.924 sys 0m0.062s
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:07:07.924 ************************************
00:07:07.924 END TEST filesystem_in_capsule_ext4
00:07:07.924 ************************************
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:07.924 ************************************
00:07:07.924 START TEST filesystem_in_capsule_btrfs
00:07:07.924 ************************************
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:07:07.924 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force
00:07:07.925 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:07:07.925 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:07:07.925 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:07:08.182 btrfs-progs v6.6.2
00:07:08.182 See https://btrfs.readthedocs.io for more information.
00:07:08.182
00:07:08.182 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:07:08.182 NOTE: several default settings have changed in version 5.15, please make sure
00:07:08.182 this does not affect your deployments:
00:07:08.182 - DUP for metadata (-m dup)
00:07:08.182 - enabled no-holes (-O no-holes)
00:07:08.182 - enabled free-space-tree (-R free-space-tree)
00:07:08.182
00:07:08.182 Label: (null)
00:07:08.182 UUID: cc3a2e9b-2017-40b6-b7fb-a7c89652e494
00:07:08.182 Node size: 16384
00:07:08.182 Sector size: 4096
00:07:08.182 Filesystem size: 510.00MiB
00:07:08.182 Block group profiles:
00:07:08.182 Data: single 8.00MiB
00:07:08.182 Metadata: DUP 32.00MiB
00:07:08.182 System: DUP 8.00MiB
00:07:08.182 SSD detected: yes
00:07:08.182 Zoned device: no
00:07:08.182 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:07:08.182 Runtime features: free-space-tree
00:07:08.182 Checksum: crc32c
00:07:08.182 Number of devices: 1
00:07:08.182 Devices:
00:07:08.182 ID SIZE PATH
00:07:08.182 1 510.00MiB /dev/nvme0n1p1
00:07:08.182
00:07:08.182 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0
00:07:08.182 21:28:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 273930
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:08.439
00:07:08.439 real 0m0.490s
00:07:08.439 user 0m0.017s
00:07:08.439 sys 0m0.118s
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:07:08.439 ************************************
00:07:08.439 END TEST filesystem_in_capsule_btrfs
00:07:08.439 ************************************
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:08.439 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:08.697 ************************************
00:07:08.697 START TEST filesystem_in_capsule_xfs
00:07:08.697 ************************************
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f
00:07:08.697 21:28:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:07:08.697 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:07:08.697 = sectsz=512 attr=2, projid32bit=1
00:07:08.697 = crc=1 finobt=1, sparse=1, rmapbt=0
00:07:08.697 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:07:08.697 data = bsize=4096 blocks=130560, imaxpct=25
00:07:08.697 = sunit=0 swidth=0 blks
00:07:08.697 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:07:08.697 log =internal log bsize=4096 blocks=16384, version=2
00:07:08.697 = sectsz=512 sunit=0 blks, lazy-count=1
00:07:08.697 realtime =none extsz=4096 blocks=0, rtextents=0
00:07:09.263 Discarding blocks...Done.
00:07:09.263 21:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0
00:07:09.263 21:29:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:11.790 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:11.790 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:07:11.790 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:11.790 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:07:11.790 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:07:11.790 21:29:02
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:11.790 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 273930 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.049 00:07:12.049 real 0m3.350s 00:07:12.049 user 0m0.019s 00:07:12.049 sys 0m0.060s 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:12.049 ************************************ 00:07:12.049 END TEST filesystem_in_capsule_xfs 00:07:12.049 ************************************ 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:12.049 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:12.049 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 273930 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 273930 ']' 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # kill -0 273930 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 273930 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 273930' 00:07:12.308 killing process with pid 273930 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 273930 00:07:12.308 21:29:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 273930 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:12.567 00:07:12.567 real 0m21.846s 00:07:12.567 user 1m24.361s 00:07:12.567 sys 0m2.767s 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:12.567 ************************************ 00:07:12.567 END TEST nvmf_filesystem_in_capsule 00:07:12.567 ************************************ 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:12.567 21:29:03 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:12.567 rmmod nvme_tcp 00:07:12.567 rmmod nvme_fabrics 00:07:12.567 rmmod nvme_keyring 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.567 21:29:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.111 21:29:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:15.111 00:07:15.111 real 0m56.295s 00:07:15.111 user 3m22.207s 00:07:15.111 sys 0m7.755s 00:07:15.111 21:29:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.111 21:29:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.111 ************************************ 00:07:15.111 END TEST nvmf_filesystem 00:07:15.111 ************************************ 00:07:15.111 21:29:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:15.111 21:29:05 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:15.111 21:29:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.111 21:29:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.111 21:29:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.111 ************************************ 00:07:15.111 START TEST nvmf_target_discovery 00:07:15.111 ************************************ 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:15.111 * Looking for test storage... 
00:07:15.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.111 21:29:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.112 21:29:05 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:15.112 21:29:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.491 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:16.492 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.492 
21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:16.492 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:16.492 Found net devices under 0000:08:00.0: cvl_0_0 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:16.492 Found net devices under 0000:08:00.1: cvl_0_1 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.492 21:29:07 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:07:16.492 00:07:16.492 --- 10.0.0.2 ping statistics --- 00:07:16.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.492 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:07:16.492 00:07:16.492 --- 10.0.0.1 ping statistics --- 00:07:16.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.492 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=277577 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 277577 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 277577 ']' 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.492 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:16.492 [2024-07-15 21:29:07.279505] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:16.492 [2024-07-15 21:29:07.279589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.751 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.751 [2024-07-15 21:29:07.344184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.751 [2024-07-15 21:29:07.461147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.751 [2024-07-15 21:29:07.461202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.751 [2024-07-15 21:29:07.461219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.751 [2024-07-15 21:29:07.461232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.751 [2024-07-15 21:29:07.461244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
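The `nvmf/common.sh` lines traced above (roughly @244 through @268) build the test topology: one of the two NIC ports is moved into a private network namespace to act as the NVMe-oF target (10.0.0.2), while the other stays in the root namespace as the initiator (10.0.0.1), with an iptables rule opening TCP port 4420 and a ping in each direction to verify connectivity. A minimal sketch of that sequence follows, assuming the same `cvl_0_0`/`cvl_0_1` interface names as the log; it defaults to collecting the commands instead of executing them, since really running them requires root and those exact devices.

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup traced in nvmf/common.sh (@244-@268).
# Interface names and addresses mirror the log. DRY_RUN=1 (the default)
# only records the commands; set DRY_RUN=0 to execute (needs root).
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk
DRY_RUN=${DRY_RUN:-1}
CMDS=""

run() {
    if [ "$DRY_RUN" = 1 ]; then
        CMDS="$CMDS$*"$'\n'
    else
        "$@"
    fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Allow initiator-side connections to the NVMe/TCP port.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity-check reachability in both directions, as the harness does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1

printf '%s' "$CMDS"
```

The payoff of the namespace split is visible later in the log: `nvmf_tgt` is launched via `ip netns exec cvl_0_0_ns_spdk`, so target and initiator exercise a real TCP path over the cabled ports rather than loopback.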
00:07:16.751 [2024-07-15 21:29:07.461351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.751 [2024-07-15 21:29:07.461431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.751 [2024-07-15 21:29:07.461492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.751 [2024-07-15 21:29:07.461498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 [2024-07-15 21:29:07.603908] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:17.010 21:29:07 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 Null1 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 [2024-07-15 21:29:07.644177] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.010 21:29:07 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 Null2 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 Null3 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.010 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 Null4 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.011 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:07:17.269 00:07:17.269 Discovery Log Number of Records 6, Generation counter 6 00:07:17.269 =====Discovery Log Entry 0====== 00:07:17.269 trtype: tcp 00:07:17.269 adrfam: ipv4 00:07:17.269 subtype: current discovery subsystem 00:07:17.269 treq: not required 00:07:17.269 portid: 0 00:07:17.269 trsvcid: 4420 00:07:17.269 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:17.269 traddr: 10.0.0.2 00:07:17.269 eflags: explicit discovery connections, duplicate discovery information 00:07:17.269 sectype: none 00:07:17.269 =====Discovery Log Entry 1====== 00:07:17.269 trtype: tcp 00:07:17.269 adrfam: ipv4 00:07:17.269 subtype: nvme subsystem 00:07:17.269 treq: not required 00:07:17.269 portid: 0 00:07:17.269 trsvcid: 4420 00:07:17.269 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:17.269 traddr: 10.0.0.2 00:07:17.269 eflags: none 00:07:17.269 sectype: none 00:07:17.269 =====Discovery Log Entry 2====== 00:07:17.269 trtype: tcp 00:07:17.269 adrfam: ipv4 00:07:17.269 subtype: nvme subsystem 00:07:17.269 treq: not required 00:07:17.269 portid: 
0 00:07:17.269 trsvcid: 4420 00:07:17.269 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:17.269 traddr: 10.0.0.2 00:07:17.269 eflags: none 00:07:17.269 sectype: none 00:07:17.269 =====Discovery Log Entry 3====== 00:07:17.269 trtype: tcp 00:07:17.269 adrfam: ipv4 00:07:17.269 subtype: nvme subsystem 00:07:17.269 treq: not required 00:07:17.269 portid: 0 00:07:17.269 trsvcid: 4420 00:07:17.269 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:17.269 traddr: 10.0.0.2 00:07:17.269 eflags: none 00:07:17.269 sectype: none 00:07:17.269 =====Discovery Log Entry 4====== 00:07:17.269 trtype: tcp 00:07:17.269 adrfam: ipv4 00:07:17.269 subtype: nvme subsystem 00:07:17.269 treq: not required 00:07:17.269 portid: 0 00:07:17.269 trsvcid: 4420 00:07:17.269 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:17.269 traddr: 10.0.0.2 00:07:17.269 eflags: none 00:07:17.269 sectype: none 00:07:17.269 =====Discovery Log Entry 5====== 00:07:17.269 trtype: tcp 00:07:17.269 adrfam: ipv4 00:07:17.269 subtype: discovery subsystem referral 00:07:17.269 treq: not required 00:07:17.269 portid: 0 00:07:17.269 trsvcid: 4430 00:07:17.269 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:17.269 traddr: 10.0.0.2 00:07:17.269 eflags: none 00:07:17.269 sectype: none 00:07:17.269 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:17.269 Perform nvmf subsystem discovery via RPC 00:07:17.269 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:17.269 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.269 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.269 [ 00:07:17.269 { 00:07:17.269 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:17.269 "subtype": "Discovery", 00:07:17.269 "listen_addresses": [ 00:07:17.269 { 00:07:17.269 "trtype": "TCP", 00:07:17.269 "adrfam": "IPv4", 00:07:17.269 "traddr": "10.0.0.2", 
00:07:17.269 "trsvcid": "4420" 00:07:17.269 } 00:07:17.269 ], 00:07:17.269 "allow_any_host": true, 00:07:17.269 "hosts": [] 00:07:17.269 }, 00:07:17.269 { 00:07:17.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:17.269 "subtype": "NVMe", 00:07:17.269 "listen_addresses": [ 00:07:17.269 { 00:07:17.269 "trtype": "TCP", 00:07:17.269 "adrfam": "IPv4", 00:07:17.269 "traddr": "10.0.0.2", 00:07:17.270 "trsvcid": "4420" 00:07:17.270 } 00:07:17.270 ], 00:07:17.270 "allow_any_host": true, 00:07:17.270 "hosts": [], 00:07:17.270 "serial_number": "SPDK00000000000001", 00:07:17.270 "model_number": "SPDK bdev Controller", 00:07:17.270 "max_namespaces": 32, 00:07:17.270 "min_cntlid": 1, 00:07:17.270 "max_cntlid": 65519, 00:07:17.270 "namespaces": [ 00:07:17.270 { 00:07:17.270 "nsid": 1, 00:07:17.270 "bdev_name": "Null1", 00:07:17.270 "name": "Null1", 00:07:17.270 "nguid": "4DF764E5671B451FBEFC8F362ACB3484", 00:07:17.270 "uuid": "4df764e5-671b-451f-befc-8f362acb3484" 00:07:17.270 } 00:07:17.270 ] 00:07:17.270 }, 00:07:17.270 { 00:07:17.270 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:17.270 "subtype": "NVMe", 00:07:17.270 "listen_addresses": [ 00:07:17.270 { 00:07:17.270 "trtype": "TCP", 00:07:17.270 "adrfam": "IPv4", 00:07:17.270 "traddr": "10.0.0.2", 00:07:17.270 "trsvcid": "4420" 00:07:17.270 } 00:07:17.270 ], 00:07:17.270 "allow_any_host": true, 00:07:17.270 "hosts": [], 00:07:17.270 "serial_number": "SPDK00000000000002", 00:07:17.270 "model_number": "SPDK bdev Controller", 00:07:17.270 "max_namespaces": 32, 00:07:17.270 "min_cntlid": 1, 00:07:17.270 "max_cntlid": 65519, 00:07:17.270 "namespaces": [ 00:07:17.270 { 00:07:17.270 "nsid": 1, 00:07:17.270 "bdev_name": "Null2", 00:07:17.270 "name": "Null2", 00:07:17.270 "nguid": "E84B13367AC14A469A859002BE3844C1", 00:07:17.270 "uuid": "e84b1336-7ac1-4a46-9a85-9002be3844c1" 00:07:17.270 } 00:07:17.270 ] 00:07:17.270 }, 00:07:17.270 { 00:07:17.270 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:17.270 "subtype": "NVMe", 00:07:17.270 
"listen_addresses": [ 00:07:17.270 { 00:07:17.270 "trtype": "TCP", 00:07:17.270 "adrfam": "IPv4", 00:07:17.270 "traddr": "10.0.0.2", 00:07:17.270 "trsvcid": "4420" 00:07:17.270 } 00:07:17.270 ], 00:07:17.270 "allow_any_host": true, 00:07:17.270 "hosts": [], 00:07:17.270 "serial_number": "SPDK00000000000003", 00:07:17.270 "model_number": "SPDK bdev Controller", 00:07:17.270 "max_namespaces": 32, 00:07:17.270 "min_cntlid": 1, 00:07:17.270 "max_cntlid": 65519, 00:07:17.270 "namespaces": [ 00:07:17.270 { 00:07:17.270 "nsid": 1, 00:07:17.270 "bdev_name": "Null3", 00:07:17.270 "name": "Null3", 00:07:17.270 "nguid": "A71550A2C41B4656A686A83F0B2161FF", 00:07:17.270 "uuid": "a71550a2-c41b-4656-a686-a83f0b2161ff" 00:07:17.270 } 00:07:17.270 ] 00:07:17.270 }, 00:07:17.270 { 00:07:17.270 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:17.270 "subtype": "NVMe", 00:07:17.270 "listen_addresses": [ 00:07:17.270 { 00:07:17.270 "trtype": "TCP", 00:07:17.270 "adrfam": "IPv4", 00:07:17.270 "traddr": "10.0.0.2", 00:07:17.270 "trsvcid": "4420" 00:07:17.270 } 00:07:17.270 ], 00:07:17.270 "allow_any_host": true, 00:07:17.270 "hosts": [], 00:07:17.270 "serial_number": "SPDK00000000000004", 00:07:17.270 "model_number": "SPDK bdev Controller", 00:07:17.270 "max_namespaces": 32, 00:07:17.270 "min_cntlid": 1, 00:07:17.270 "max_cntlid": 65519, 00:07:17.270 "namespaces": [ 00:07:17.270 { 00:07:17.270 "nsid": 1, 00:07:17.270 "bdev_name": "Null4", 00:07:17.270 "name": "Null4", 00:07:17.270 "nguid": "D635A9AEF98C4E958FC1042A39A52E99", 00:07:17.270 "uuid": "d635a9ae-f98c-4e95-8fc1-042a39a52e99" 00:07:17.270 } 00:07:17.270 ] 00:07:17.270 } 00:07:17.270 ] 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.270 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:17.271 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:17.528 rmmod nvme_tcp 00:07:17.528 rmmod nvme_fabrics 00:07:17.528 rmmod nvme_keyring 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:17.528 
21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 277577 ']' 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 277577 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 277577 ']' 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 277577 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:17.528 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.529 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 277577 00:07:17.529 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.529 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.529 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 277577' 00:07:17.529 killing process with pid 277577 00:07:17.529 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 277577 00:07:17.529 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 277577 00:07:17.788 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:17.788 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:17.788 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:17.788 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:17.788 21:29:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:17.788 21:29:08 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.788 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.788 21:29:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.691 21:29:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:19.691 00:07:19.691 real 0m5.022s 00:07:19.691 user 0m4.248s 00:07:19.691 sys 0m1.539s 00:07:19.692 21:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.692 21:29:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:19.692 ************************************ 00:07:19.692 END TEST nvmf_target_discovery 00:07:19.692 ************************************ 00:07:19.692 21:29:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:19.692 21:29:10 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:19.692 21:29:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:19.692 21:29:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.692 21:29:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:19.692 ************************************ 00:07:19.692 START TEST nvmf_referrals 00:07:19.692 ************************************ 00:07:19.692 21:29:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:19.951 * Looking for test storage... 
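The teardown traced above hinges on the harness's `trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT` installed earlier: because the handler is bound to EXIT, the target process is killed, modules unloaded, and the namespace removed even when a test step fails. A tiny self-contained illustration of that pattern, with a hypothetical `cleanup ran` marker standing in for `nvmftestfini`:

```shell
#!/usr/bin/env bash
# Illustration of the EXIT-trap cleanup pattern the harness relies on.
# The marker file stands in for nvmftestfini (kill nvmf_tgt, rmmod
# nvme-tcp, delete the netns); names here are hypothetical.
set -eu

marker=$(mktemp)

# Run a "test" in a subshell whose trap fires on success *and* failure.
(
    trap 'echo "cleanup ran" > "$marker"' EXIT
    exit 3    # simulate a failing test step
) || true     # the parent tolerates the failure, as run_test does

result=$(cat "$marker")
rm -f "$marker"
printf '%s\n' "$result"
```

This is why the log shows `rmmod nvme_tcp`, `killprocess 277577`, and `remove_spdk_ns` executing in order at the end of the test even though `trap - SIGINT SIGTERM EXIT` has to be issued first on the success path to avoid running the handler twice.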
00:07:19.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.951 21:29:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:19.952 
21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:19.952 21:29:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:21.852 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:21.852 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:21.852 Found net devices under 0000:08:00.0: cvl_0_0 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.852 21:29:12 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.852 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:21.853 Found net devices under 0000:08:00.1: cvl_0_1 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:21.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:07:21.853 00:07:21.853 --- 10.0.0.2 ping statistics --- 00:07:21.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.853 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:07:21.853 00:07:21.853 --- 10.0.0.1 ping statistics --- 00:07:21.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.853 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.853 21:29:12 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=279203 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 279203 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 279203 ']' 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.853 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.853 [2024-07-15 21:29:12.398390] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:21.853 [2024-07-15 21:29:12.398497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.853 [2024-07-15 21:29:12.464132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.853 [2024-07-15 21:29:12.584100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.853 [2024-07-15 21:29:12.584163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:21.853 [2024-07-15 21:29:12.584180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.853 [2024-07-15 21:29:12.584193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.853 [2024-07-15 21:29:12.584204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.853 [2024-07-15 21:29:12.584283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.853 [2024-07-15 21:29:12.584363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.853 [2024-07-15 21:29:12.584445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.853 [2024-07-15 21:29:12.584450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 [2024-07-15 21:29:12.736880] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.111 21:29:12 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 [2024-07-15 21:29:12.749050] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.111 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:22.112 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:22.112 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:22.112 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:22.112 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:22.112 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.112 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:22.112 21:29:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.369 21:29:13 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:22.369 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:22.626 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.882 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:23.149 21:29:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:23.405 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:23.405 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:23.405 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:23.405 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:23.405 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:23.405 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.405 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:23.405 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:23.661 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:23.661 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:23.661 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:23.661 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.662 21:29:14 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:23.662 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.919 rmmod nvme_tcp 00:07:23.919 rmmod nvme_fabrics 00:07:23.919 rmmod nvme_keyring 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 279203 ']' 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 279203 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 279203 ']' 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 279203 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 279203 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 279203' 00:07:23.919 killing process with pid 279203 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 279203 00:07:23.919 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 279203 00:07:24.179 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.179 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.179 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:24.179 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.179 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:24.179 21:29:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.179 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.179 21:29:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.084 21:29:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:26.084 00:07:26.084 real 0m6.371s 00:07:26.084 user 0m10.089s 00:07:26.084 sys 0m1.837s 00:07:26.084 21:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.084 21:29:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:26.084 ************************************ 
00:07:26.084 END TEST nvmf_referrals 00:07:26.084 ************************************ 00:07:26.084 21:29:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:26.084 21:29:16 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:26.084 21:29:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.084 21:29:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.084 21:29:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.343 ************************************ 00:07:26.343 START TEST nvmf_connect_disconnect 00:07:26.343 ************************************ 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:26.343 * Looking for test storage... 00:07:26.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.343 21:29:16 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.343 21:29:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:28.249 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:28.249 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.249 21:29:18 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:28.249 Found net devices under 0000:08:00.0: cvl_0_0 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:28.249 Found net devices under 0000:08:00.1: cvl_0_1 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:28.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:07:28.249 00:07:28.249 --- 10.0.0.2 ping statistics --- 00:07:28.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.249 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:07:28.249 00:07:28.249 --- 10.0.0.1 ping statistics --- 00:07:28.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.249 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:28.249 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=280920 00:07:28.250 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.250 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 280920 00:07:28.250 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 280920 ']' 00:07:28.250 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.250 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.250 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.250 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.250 21:29:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:28.250 [2024-07-15 21:29:18.813494] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:28.250 [2024-07-15 21:29:18.813597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.250 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.250 [2024-07-15 21:29:18.879328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.250 [2024-07-15 21:29:18.999373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.250 [2024-07-15 21:29:18.999432] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.250 [2024-07-15 21:29:18.999447] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.250 [2024-07-15 21:29:18.999461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.250 [2024-07-15 21:29:18.999473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:28.250 [2024-07-15 21:29:18.999553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.250 [2024-07-15 21:29:18.999637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.250 [2024-07-15 21:29:18.999687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.250 [2024-07-15 21:29:18.999691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:28.508 [2024-07-15 21:29:19.148944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:28.508 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:28.509 [2024-07-15 21:29:19.198863] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:28.509 21:29:19 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:28.509 21:29:19 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:31.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.300 21:29:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:41.300 21:29:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:41.300 21:29:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.300 21:29:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:41.300 21:29:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.300 21:29:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:41.300 21:29:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.300 21:29:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.300 rmmod nvme_tcp 00:07:41.300 rmmod nvme_fabrics 00:07:41.300 rmmod nvme_keyring 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 280920 ']' 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 280920 00:07:41.300 21:29:32 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 280920 ']' 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 280920 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 280920 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 280920' 00:07:41.300 killing process with pid 280920 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 280920 00:07:41.300 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 280920 00:07:41.558 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.558 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.558 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.558 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.558 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.558 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.558 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.558 21:29:32 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.090 21:29:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.090 00:07:44.090 real 0m17.401s 00:07:44.090 user 0m52.346s 00:07:44.090 sys 0m3.010s 00:07:44.090 21:29:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.090 21:29:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:44.090 ************************************ 00:07:44.090 END TEST nvmf_connect_disconnect 00:07:44.090 ************************************ 00:07:44.090 21:29:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:44.090 21:29:34 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:44.090 21:29:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.090 21:29:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.090 21:29:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.090 ************************************ 00:07:44.090 START TEST nvmf_multitarget 00:07:44.090 ************************************ 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:44.090 * Looking for test storage... 
00:07:44.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:44.090 21:29:34 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:45.466 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.466 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:45.466 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:45.467 Found net devices under 0000:08:00.0: cvl_0_0 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:45.467 21:29:36 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:45.467 Found net devices under 0000:08:00.1: cvl_0_1 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.467 21:29:36 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:45.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:07:45.467 00:07:45.467 --- 10.0.0.2 ping statistics --- 00:07:45.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.467 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:07:45.467 00:07:45.467 --- 10.0.0.1 ping statistics --- 00:07:45.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.467 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=283726 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 283726 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@829 -- # '[' -z 283726 ']' 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.467 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:45.725 [2024-07-15 21:29:36.281430] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:07:45.725 [2024-07-15 21:29:36.281518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.725 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.725 [2024-07-15 21:29:36.346926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.725 [2024-07-15 21:29:36.464168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.725 [2024-07-15 21:29:36.464225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.725 [2024-07-15 21:29:36.464241] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.725 [2024-07-15 21:29:36.464262] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.725 [2024-07-15 21:29:36.464274] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:45.725 [2024-07-15 21:29:36.464386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.725 [2024-07-15 21:29:36.464462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.725 [2024-07-15 21:29:36.464559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.725 [2024-07-15 21:29:36.464566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:45.984 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:46.241 "nvmf_tgt_1" 00:07:46.241 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:46.241 "nvmf_tgt_2" 00:07:46.241 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:46.241 21:29:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:46.500 21:29:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:46.500 21:29:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:46.500 true 00:07:46.500 21:29:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:46.758 true 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.758 21:29:37 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.758 rmmod nvme_tcp 00:07:46.758 rmmod nvme_fabrics 00:07:46.758 rmmod nvme_keyring 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 283726 ']' 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 283726 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 283726 ']' 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 283726 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:46.758 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 283726 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 283726' 00:07:47.017 killing process with pid 283726 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 283726 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 283726 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.017 21:29:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.559 21:29:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.559 00:07:49.559 real 0m5.476s 00:07:49.559 user 0m6.595s 00:07:49.559 sys 0m1.717s 00:07:49.559 21:29:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.559 21:29:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:49.559 ************************************ 00:07:49.559 END TEST nvmf_multitarget 00:07:49.559 ************************************ 00:07:49.559 21:29:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:49.559 21:29:39 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:49.559 21:29:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:49.559 21:29:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.559 21:29:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.559 ************************************ 00:07:49.559 START TEST nvmf_rpc 00:07:49.559 ************************************ 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:49.559 * Looking for test storage... 
00:07:49.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.559 21:29:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.939 21:29:41 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.939 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:50.940 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:50.940 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:50.940 Found net devices under 0000:08:00.0: cvl_0_0 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:50.940 Found net devices under 0000:08:00.1: cvl_0_1 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.940 21:29:41 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:07:50.940 00:07:50.940 --- 10.0.0.2 ping statistics --- 00:07:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.940 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:07:50.940 00:07:50.940 --- 10.0.0.1 ping statistics --- 00:07:50.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.940 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:50.940 
21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.940 21:29:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.199 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=285359 00:07:51.199 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 285359 00:07:51.199 21:29:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 285359 ']' 00:07:51.200 21:29:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.200 21:29:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.200 21:29:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.200 21:29:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.200 21:29:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.200 21:29:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.200 [2024-07-15 21:29:41.785346] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:07:51.200 [2024-07-15 21:29:41.785453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.200 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.200 [2024-07-15 21:29:41.856180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.200 [2024-07-15 21:29:41.974998] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.200 [2024-07-15 21:29:41.975057] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.200 [2024-07-15 21:29:41.975073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.200 [2024-07-15 21:29:41.975093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.200 [2024-07-15 21:29:41.975105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:51.200 [2024-07-15 21:29:41.975166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.200 [2024-07-15 21:29:41.975199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.200 [2024-07-15 21:29:41.975263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.200 [2024-07-15 21:29:41.975273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:51.459 "tick_rate": 2700000000, 00:07:51.459 "poll_groups": [ 00:07:51.459 { 00:07:51.459 "name": "nvmf_tgt_poll_group_000", 00:07:51.459 "admin_qpairs": 0, 00:07:51.459 "io_qpairs": 0, 00:07:51.459 "current_admin_qpairs": 0, 00:07:51.459 "current_io_qpairs": 0, 00:07:51.459 "pending_bdev_io": 0, 00:07:51.459 "completed_nvme_io": 0, 00:07:51.459 "transports": [] 00:07:51.459 }, 00:07:51.459 { 00:07:51.459 "name": "nvmf_tgt_poll_group_001", 00:07:51.459 "admin_qpairs": 0, 00:07:51.459 "io_qpairs": 0, 00:07:51.459 "current_admin_qpairs": 
0, 00:07:51.459 "current_io_qpairs": 0, 00:07:51.459 "pending_bdev_io": 0, 00:07:51.459 "completed_nvme_io": 0, 00:07:51.459 "transports": [] 00:07:51.459 }, 00:07:51.459 { 00:07:51.459 "name": "nvmf_tgt_poll_group_002", 00:07:51.459 "admin_qpairs": 0, 00:07:51.459 "io_qpairs": 0, 00:07:51.459 "current_admin_qpairs": 0, 00:07:51.459 "current_io_qpairs": 0, 00:07:51.459 "pending_bdev_io": 0, 00:07:51.459 "completed_nvme_io": 0, 00:07:51.459 "transports": [] 00:07:51.459 }, 00:07:51.459 { 00:07:51.459 "name": "nvmf_tgt_poll_group_003", 00:07:51.459 "admin_qpairs": 0, 00:07:51.459 "io_qpairs": 0, 00:07:51.459 "current_admin_qpairs": 0, 00:07:51.459 "current_io_qpairs": 0, 00:07:51.459 "pending_bdev_io": 0, 00:07:51.459 "completed_nvme_io": 0, 00:07:51.459 "transports": [] 00:07:51.459 } 00:07:51.459 ] 00:07:51.459 }' 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.459 [2024-07-15 21:29:42.217213] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats
00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:07:51.459 "tick_rate": 2700000000,
00:07:51.459 "poll_groups": [
00:07:51.459 {
00:07:51.459 "name": "nvmf_tgt_poll_group_000",
00:07:51.459 "admin_qpairs": 0,
00:07:51.459 "io_qpairs": 0,
00:07:51.459 "current_admin_qpairs": 0,
00:07:51.459 "current_io_qpairs": 0,
00:07:51.459 "pending_bdev_io": 0,
00:07:51.459 "completed_nvme_io": 0,
00:07:51.459 "transports": [
00:07:51.459 {
00:07:51.459 "trtype": "TCP"
00:07:51.459 }
00:07:51.459 ]
00:07:51.459 },
00:07:51.459 {
00:07:51.459 "name": "nvmf_tgt_poll_group_001",
00:07:51.459 "admin_qpairs": 0,
00:07:51.459 "io_qpairs": 0,
00:07:51.459 "current_admin_qpairs": 0,
00:07:51.459 "current_io_qpairs": 0,
00:07:51.459 "pending_bdev_io": 0,
00:07:51.459 "completed_nvme_io": 0,
00:07:51.459 "transports": [
00:07:51.459 {
00:07:51.459 "trtype": "TCP"
00:07:51.459 }
00:07:51.459 ]
00:07:51.459 },
00:07:51.459 {
00:07:51.459 "name": "nvmf_tgt_poll_group_002",
00:07:51.459 "admin_qpairs": 0,
00:07:51.459 "io_qpairs": 0,
00:07:51.459 "current_admin_qpairs": 0,
00:07:51.459 "current_io_qpairs": 0,
00:07:51.459 "pending_bdev_io": 0,
00:07:51.459 "completed_nvme_io": 0,
00:07:51.459 "transports": [
00:07:51.459 {
00:07:51.459 "trtype": "TCP"
00:07:51.459 }
00:07:51.459 ]
00:07:51.459 },
00:07:51.459 {
00:07:51.459 "name": "nvmf_tgt_poll_group_003",
00:07:51.459 "admin_qpairs": 0,
00:07:51.459 "io_qpairs": 0,
00:07:51.459 "current_admin_qpairs": 0,
00:07:51.459 "current_io_qpairs": 0,
00:07:51.459 "pending_bdev_io": 0,
00:07:51.459 "completed_nvme_io": 0,
00:07:51.459 "transports": [
00:07:51.459 {
00:07:51.459 "trtype": "TCP"
00:07:51.459 }
00:07:51.459 ]
00:07:51.459 }
00:07:51.459 ]
00:07:51.459 }'
00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:07:51.459 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:51.717 Malloc1
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc --
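The jsum helper exercised above (target/rpc.sh@19-20) applies a jq filter to the captured nvmf_get_stats JSON, producing one number per poll group, and totals them with awk. A minimal, self-contained sketch of that pattern, assuming a hypothetical stand-in `stats` document in place of the real `rpc_cmd nvmf_get_stats` output:

```shell
# Minimal sketch of the jsum pattern from target/rpc.sh: apply a jq
# filter that yields one number per poll group, then sum with awk.
# "stats" is a hypothetical stand-in for the nvmf_get_stats output.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","admin_qpairs":0,"io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_001","admin_qpairs":0,"io_qpairs":0}]}'

jsum() {
    local filter=$1
    # jq emits one value per poll group; awk accumulates the total.
    echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # prints 0 for an idle target
```

The log's `(( 0 == 0 ))` checks at rpc.sh@35 and @36 are this total being compared against the expected qpair count.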
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 [2024-07-15 21:29:42.374321] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:07:51.717 [2024-07-15 21:29:42.396778] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:07:51.717 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:51.717 could not add new controller: failed to write to nvme-fabrics device 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.718 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:52.283 21:29:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:52.283 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:52.283 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:52.283 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:52.283 21:29:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:54.177 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:54.177 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:54.177 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:54.177 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:54.177 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:54.177 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:54.177 21:29:44 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.435 21:29:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.435 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:54.435 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:54.435 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.436 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:54.436 21:29:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.436 21:29:45 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.436 [2024-07-15 21:29:45.034146] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:07:54.436 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:54.436 could not add new controller: failed to write to nvme-fabrics device 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
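The failing `nvme connect` above is deliberately wrapped in the NOT helper from common/autotest_common.sh (the @648-675 lines), which succeeds only when the wrapped command fails, since the host has not yet been added to the subsystem. A simplified sketch of that inversion, omitting the exit-status bookkeeping (`es`, signal detection) the real helper performs:

```shell
# Simplified sketch of the NOT wrapper: run a command the test expects
# to fail, and return success only if it actually failed. The real
# helper in common/autotest_common.sh also records the exit status in
# "es" and treats es > 128 (signals) specially.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded: the test should fail
    fi
    return 0        # command failed, which is what the test expected
}

NOT false && echo "expected failure observed"
```

After the expected rejection, the test adds the host NQN with `nvmf_subsystem_add_host` and the same connect succeeds.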
xtrace_disable 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.436 21:29:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:55.000 21:29:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:55.000 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:55.000 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:55.000 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:55.000 21:29:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.895 [2024-07-15 21:29:47.638729] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.895 21:29:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.459 21:29:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.459 21:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:57.460 21:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.460 21:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:57.460 21:29:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:59.355 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:59.355 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:59.355 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.355 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:59.355 21:29:50 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.355 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:59.355 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 [2024-07-15 21:29:50.240722] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.612 21:29:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.175 21:29:50 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.175 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:00.175 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.175 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:00.175 21:29:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:02.066 21:29:52 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.066 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 [2024-07-15 21:29:52.885561] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:02.323 21:29:52 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.323 21:29:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.581 21:29:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.581 21:29:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:02.581 21:29:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.581 21:29:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:02.581 21:29:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 
0 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.102 [2024-07-15 21:29:55.465343] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.102 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:05.360 21:29:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.360 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:08:05.360 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.360 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:05.360 21:29:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.256 21:29:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.256 21:29:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:07.256 21:29:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:07.256 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.256 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.256 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.256 21:29:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.257 [2024-07-15 21:29:58.019758] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.257 21:29:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:07.820 21:29:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:07.820 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:07.820 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:07.820 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:07.820 21:29:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.346 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.346 21:30:00 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.346 [2024-07-15 21:30:00.747906] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.346 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 [2024-07-15 21:30:00.796004] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 [2024-07-15 21:30:00.844167] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 [2024-07-15 21:30:00.892327] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 [2024-07-15 21:30:00.940460] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.347 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:10.347 "tick_rate": 2700000000, 00:08:10.347 "poll_groups": [ 00:08:10.347 { 00:08:10.347 "name": "nvmf_tgt_poll_group_000", 00:08:10.347 "admin_qpairs": 2, 00:08:10.347 "io_qpairs": 56, 00:08:10.347 "current_admin_qpairs": 0, 00:08:10.347 "current_io_qpairs": 0, 00:08:10.347 "pending_bdev_io": 0, 00:08:10.347 "completed_nvme_io": 148, 00:08:10.347 "transports": [ 00:08:10.347 { 00:08:10.347 "trtype": "TCP" 00:08:10.347 } 00:08:10.347 ] 00:08:10.347 }, 00:08:10.347 { 00:08:10.347 "name": "nvmf_tgt_poll_group_001", 00:08:10.347 "admin_qpairs": 2, 00:08:10.347 "io_qpairs": 56, 
00:08:10.347 "current_admin_qpairs": 0, 00:08:10.347 "current_io_qpairs": 0, 00:08:10.347 "pending_bdev_io": 0, 00:08:10.347 "completed_nvme_io": 137, 00:08:10.347 "transports": [ 00:08:10.347 { 00:08:10.347 "trtype": "TCP" 00:08:10.348 } 00:08:10.348 ] 00:08:10.348 }, 00:08:10.348 { 00:08:10.348 "name": "nvmf_tgt_poll_group_002", 00:08:10.348 "admin_qpairs": 1, 00:08:10.348 "io_qpairs": 56, 00:08:10.348 "current_admin_qpairs": 0, 00:08:10.348 "current_io_qpairs": 0, 00:08:10.348 "pending_bdev_io": 0, 00:08:10.348 "completed_nvme_io": 186, 00:08:10.348 "transports": [ 00:08:10.348 { 00:08:10.348 "trtype": "TCP" 00:08:10.348 } 00:08:10.348 ] 00:08:10.348 }, 00:08:10.348 { 00:08:10.348 "name": "nvmf_tgt_poll_group_003", 00:08:10.348 "admin_qpairs": 2, 00:08:10.348 "io_qpairs": 56, 00:08:10.348 "current_admin_qpairs": 0, 00:08:10.348 "current_io_qpairs": 0, 00:08:10.348 "pending_bdev_io": 0, 00:08:10.348 "completed_nvme_io": 103, 00:08:10.348 "transports": [ 00:08:10.348 { 00:08:10.348 "trtype": "TCP" 00:08:10.348 } 00:08:10.348 ] 00:08:10.348 } 00:08:10.348 ] 00:08:10.348 }' 00:08:10.348 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:10.348 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:10.348 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:10.348 21:30:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 224 > 0 )) 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.348 rmmod nvme_tcp 00:08:10.348 rmmod nvme_fabrics 00:08:10.348 rmmod nvme_keyring 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 285359 ']' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 285359 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 285359 ']' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 285359 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.348 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 285359 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:10.605 21:30:01 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 285359' 00:08:10.605 killing process with pid 285359 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 285359 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 285359 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.605 21:30:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.142 21:30:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.142 00:08:13.142 real 0m23.536s 00:08:13.142 user 1m16.648s 00:08:13.142 sys 0m3.758s 00:08:13.142 21:30:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.142 21:30:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.142 ************************************ 00:08:13.142 END TEST nvmf_rpc 00:08:13.142 ************************************ 00:08:13.142 21:30:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:13.142 21:30:03 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:13.142 21:30:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:13.142 21:30:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:08:13.142 21:30:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:13.142 ************************************ 00:08:13.142 START TEST nvmf_invalid 00:08:13.142 ************************************ 00:08:13.142 21:30:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:13.142 * Looking for test storage... 00:08:13.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.142 21:30:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.142 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:13.142 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.142 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:13.143 21:30:03 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.143 21:30:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:14.520 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.520 21:30:05 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:14.520 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.520 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:14.521 Found net devices under 0000:08:00.0: cvl_0_0 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:14.521 Found net devices under 0000:08:00.1: cvl_0_1 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.521 21:30:05 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:14.521 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:14.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:14.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:08:14.779 00:08:14.779 --- 10.0.0.2 ping statistics --- 00:08:14.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.779 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:08:14.779 00:08:14.779 --- 10.0.0.1 ping statistics --- 00:08:14.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.779 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@481 -- # nvmfpid=288889 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 288889 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 288889 ']' 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.779 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:14.779 [2024-07-15 21:30:05.431606] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:14.779 [2024-07-15 21:30:05.431705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.779 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.779 [2024-07-15 21:30:05.500954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.037 [2024-07-15 21:30:05.619920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.037 [2024-07-15 21:30:05.619978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
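The nvmf_tcp_init sequence in the log above moves one port of the ice NIC into a private network namespace so the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) talk over real hardware, then opens TCP 4420. A dry-run sketch of that wiring, with names and addresses copied from the log; `run` only prints the commands, so nothing here needs root or the actual NICs:

```shell
#!/bin/sh
# Dry-run of the namespace wiring nvmf_tcp_init performs.
# Interface names, namespace name, and IPs are taken from the log above.
NS=cvl_0_0_ns_spdk
TGT=cvl_0_0   # target-side port, moved into the namespace
INI=cvl_0_1   # initiator-side port, stays in the root namespace

run() { printf '+ %s\n' "$*"; }   # echo instead of execute

run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
```

The two ping checks in the log then confirm reachability in both directions before nvmf_tgt is launched under `ip netns exec "$NS"`.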
00:08:15.037 [2024-07-15 21:30:05.619994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.037 [2024-07-15 21:30:05.620008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.037 [2024-07-15 21:30:05.620020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.037 [2024-07-15 21:30:05.620074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.037 [2024-07-15 21:30:05.620127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.037 [2024-07-15 21:30:05.620210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.037 [2024-07-15 21:30:05.620176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.037 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.037 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:15.037 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.037 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:15.037 21:30:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:15.037 21:30:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.037 21:30:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:15.037 21:30:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10725 00:08:15.294 [2024-07-15 21:30:06.039730] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:15.294 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:08:15.294 { 00:08:15.294 "nqn": "nqn.2016-06.io.spdk:cnode10725", 00:08:15.294 "tgt_name": "foobar", 00:08:15.294 "method": "nvmf_create_subsystem", 00:08:15.294 "req_id": 1 00:08:15.294 } 00:08:15.294 Got JSON-RPC error response 00:08:15.294 response: 00:08:15.294 { 00:08:15.294 "code": -32603, 00:08:15.294 "message": "Unable to find target foobar" 00:08:15.294 }' 00:08:15.294 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:15.294 { 00:08:15.294 "nqn": "nqn.2016-06.io.spdk:cnode10725", 00:08:15.294 "tgt_name": "foobar", 00:08:15.294 "method": "nvmf_create_subsystem", 00:08:15.294 "req_id": 1 00:08:15.294 } 00:08:15.294 Got JSON-RPC error response 00:08:15.294 response: 00:08:15.294 { 00:08:15.294 "code": -32603, 00:08:15.294 "message": "Unable to find target foobar" 00:08:15.294 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:15.294 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:15.294 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6468 00:08:15.551 [2024-07-15 21:30:06.332656] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6468: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:15.808 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:15.808 { 00:08:15.808 "nqn": "nqn.2016-06.io.spdk:cnode6468", 00:08:15.808 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:15.808 "method": "nvmf_create_subsystem", 00:08:15.808 "req_id": 1 00:08:15.808 } 00:08:15.808 Got JSON-RPC error response 00:08:15.808 response: 00:08:15.808 { 00:08:15.808 "code": -32602, 00:08:15.808 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:15.808 }' 00:08:15.808 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:15.808 { 00:08:15.808 "nqn": 
"nqn.2016-06.io.spdk:cnode6468", 00:08:15.808 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:15.808 "method": "nvmf_create_subsystem", 00:08:15.808 "req_id": 1 00:08:15.808 } 00:08:15.808 Got JSON-RPC error response 00:08:15.808 response: 00:08:15.808 { 00:08:15.808 "code": -32602, 00:08:15.808 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:15.808 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:15.808 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:15.808 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14088 00:08:15.808 [2024-07-15 21:30:06.585457] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14088: invalid model number 'SPDK_Controller' 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:16.066 { 00:08:16.066 "nqn": "nqn.2016-06.io.spdk:cnode14088", 00:08:16.066 "model_number": "SPDK_Controller\u001f", 00:08:16.066 "method": "nvmf_create_subsystem", 00:08:16.066 "req_id": 1 00:08:16.066 } 00:08:16.066 Got JSON-RPC error response 00:08:16.066 response: 00:08:16.066 { 00:08:16.066 "code": -32602, 00:08:16.066 "message": "Invalid MN SPDK_Controller\u001f" 00:08:16.066 }' 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:16.066 { 00:08:16.066 "nqn": "nqn.2016-06.io.spdk:cnode14088", 00:08:16.066 "model_number": "SPDK_Controller\u001f", 00:08:16.066 "method": "nvmf_create_subsystem", 00:08:16.066 "req_id": 1 00:08:16.066 } 00:08:16.066 Got JSON-RPC error response 00:08:16.066 response: 00:08:16.066 { 00:08:16.066 "code": -32602, 00:08:16.066 "message": "Invalid MN SPDK_Controller\u001f" 00:08:16.066 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
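The loop that begins above (target/invalid.sh's gen_random_s) assembles a 21-character serial number one `printf %x` / `echo -e` pair at a time from a table of ASCII codes 32-127, and its iterations continue below. A compact POSIX-flavored variant that yields the same kind of string; the function name mirrors the script's, but this implementation, which swaps the character table for `tr` over /dev/urandom and restricts to codes 0x20-0x7e, is our own:

```shell
#!/bin/sh
# Generate $1 random printable-ASCII characters (space through '~').
# Same spirit as target/invalid.sh's gen_random_s, different mechanism:
# filter /dev/urandom down to the printable range and take a prefix.
gen_random_s() {
    LC_ALL=C tr -dc ' -~' </dev/urandom | head -c "$1"
}
```

Usage matches the log: `gen_random_s 21` produces a throwaway serial number such as the `!Z_%EtvuP]2`?h&RGjO!Y` value tested further down.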
00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.066 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 
00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 
21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 
00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '!Z_%EtvuP]2`?h&RGjO!Y' 00:08:16.067 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '!Z_%EtvuP]2`?h&RGjO!Y' nqn.2016-06.io.spdk:cnode5848 00:08:16.326 [2024-07-15 21:30:06.958683] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5848: invalid serial number '!Z_%EtvuP]2`?h&RGjO!Y' 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:16.326 { 00:08:16.326 "nqn": "nqn.2016-06.io.spdk:cnode5848", 00:08:16.326 "serial_number": "!Z_%EtvuP]2`?h&RGjO!Y", 00:08:16.326 "method": "nvmf_create_subsystem", 00:08:16.326 "req_id": 1 00:08:16.326 } 00:08:16.326 Got JSON-RPC error response 00:08:16.326 response: 00:08:16.326 { 00:08:16.326 "code": -32602, 00:08:16.326 "message": "Invalid SN !Z_%EtvuP]2`?h&RGjO!Y" 00:08:16.326 }' 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:16.326 { 00:08:16.326 "nqn": "nqn.2016-06.io.spdk:cnode5848", 00:08:16.326 "serial_number": "!Z_%EtvuP]2`?h&RGjO!Y", 00:08:16.326 "method": "nvmf_create_subsystem", 00:08:16.326 "req_id": 1 00:08:16.326 } 00:08:16.326 Got JSON-RPC error response 00:08:16.326 response: 00:08:16.326 { 00:08:16.326 "code": -32602, 00:08:16.326 "message": "Invalid SN !Z_%EtvuP]2`?h&RGjO!Y" 00:08:16.326 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' 
'80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
126 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:16.326 21:30:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 
00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.326 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 
00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 
00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.327 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.584 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ # == \- ]] 00:08:16.585 21:30:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '#3n~XC@K4haH\0f. 
/dev/null' 00:08:19.470 21:30:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.025 21:30:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.025 00:08:22.025 real 0m8.811s 00:08:22.025 user 0m22.381s 00:08:22.025 sys 0m2.227s 00:08:22.025 21:30:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.025 21:30:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:22.025 ************************************ 00:08:22.025 END TEST nvmf_invalid 00:08:22.025 ************************************ 00:08:22.025 21:30:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:22.025 21:30:12 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:22.025 21:30:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.025 21:30:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.025 21:30:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.025 ************************************ 00:08:22.025 START TEST nvmf_abort 00:08:22.025 ************************************ 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:22.026 * Looking for test storage... 
00:08:22.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.026 21:30:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.405 21:30:14 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:23.405 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:23.405 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.405 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:23.406 Found net devices under 0000:08:00.0: cvl_0_0 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:23.406 Found net devices under 
0000:08:00.1: cvl_0_1 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.406 21:30:14 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:08:23.664 00:08:23.664 --- 10.0.0.2 ping statistics --- 00:08:23.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.664 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:08:23.664 00:08:23.664 --- 10.0.0.1 ping statistics --- 00:08:23.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.664 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=291530 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 291530 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 291530 ']' 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.664 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:23.664 [2024-07-15 21:30:14.348803] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:08:23.664 [2024-07-15 21:30:14.348908] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.664 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.664 [2024-07-15 21:30:14.429161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.921 [2024-07-15 21:30:14.585150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.921 [2024-07-15 21:30:14.585227] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.921 [2024-07-15 21:30:14.585258] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.921 [2024-07-15 21:30:14.585285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.921 [2024-07-15 21:30:14.585308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:23.921 [2024-07-15 21:30:14.585416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.921 [2024-07-15 21:30:14.585485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.921 [2024-07-15 21:30:14.585475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.921 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.921 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:23.921 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:23.921 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.921 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:24.179 [2024-07-15 21:30:14.745236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:24.179 Malloc0 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:24.179 Delay0 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:24.179 [2024-07-15 21:30:14.817485] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.179 21:30:14 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:24.179 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.180 [2024-07-15 21:30:14.923091] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:26.703 Initializing NVMe Controllers 00:08:26.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:26.704 controller IO queue size 128 less than required 00:08:26.704 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:26.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:26.704 Initialization complete. Launching workers. 
00:08:26.704 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 33556 00:08:26.704 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33621, failed to submit 62 00:08:26.704 success 33560, unsuccess 61, failed 0 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:26.704 rmmod nvme_tcp 00:08:26.704 rmmod nvme_fabrics 00:08:26.704 rmmod nvme_keyring 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 291530 ']' 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 291530 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 291530 ']' 00:08:26.704 21:30:17 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 291530 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 291530 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 291530' 00:08:26.704 killing process with pid 291530 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 291530 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 291530 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.704 21:30:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.241 21:30:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.241 00:08:29.241 real 0m7.143s 00:08:29.241 user 0m11.119s 00:08:29.241 sys 0m2.157s 00:08:29.241 21:30:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:29.241 21:30:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.241 ************************************ 00:08:29.241 END TEST nvmf_abort 00:08:29.241 ************************************ 00:08:29.241 21:30:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:29.241 21:30:19 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:29.241 21:30:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.241 21:30:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.241 21:30:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.241 ************************************ 00:08:29.241 START TEST nvmf_ns_hotplug_stress 00:08:29.241 ************************************ 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:29.241 * Looking for test storage... 
00:08:29.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.241 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.242 21:30:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.242 21:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:30.621 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:30.622 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.622 
21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:30.622 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:30.622 
Found net devices under 0000:08:00.0: cvl_0_0 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:30.622 Found net devices under 0000:08:00.1: cvl_0_1 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.622 21:30:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:08:30.622 00:08:30.622 --- 10.0.0.2 ping statistics --- 00:08:30.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.622 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:08:30.622 00:08:30.622 --- 10.0.0.1 ping statistics --- 00:08:30.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.622 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.622 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=293252 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 293252 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 293252 ']' 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.881 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.881 [2024-07-15 21:30:21.471328] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:08:30.881 [2024-07-15 21:30:21.471415] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.881 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.881 [2024-07-15 21:30:21.537543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.882 [2024-07-15 21:30:21.653477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.882 [2024-07-15 21:30:21.653533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.882 [2024-07-15 21:30:21.653549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.882 [2024-07-15 21:30:21.653563] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.882 [2024-07-15 21:30:21.653575] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
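The target seen starting above is launched inside the freshly created network namespace with core mask 0xE (cores 1-3, matching the three reactors reported below). A minimal sketch of that start-up step, with paths and flags taken from the log; the polling loop is an illustrative stand-in for the `waitforlisten` helper, not its actual implementation:

```shell
#!/usr/bin/env bash
# Start nvmf_tgt inside the target namespace and wait for its RPC socket.
NS=cvl_0_0_ns_spdk
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
ip netns exec "$NS" "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Illustrative replacement for waitforlisten: poll for the UNIX socket.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died
    sleep 0.1
done
```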
00:08:30.882 [2024-07-15 21:30:21.653675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.882 [2024-07-15 21:30:21.653785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.882 [2024-07-15 21:30:21.653806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.140 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.140 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:08:31.140 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.140 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.140 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.140 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.140 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:31.140 21:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:31.397 [2024-07-15 21:30:22.064291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.397 21:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:31.654 21:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.910 [2024-07-15 21:30:22.654884] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:08:31.911 21:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:32.475 21:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:32.475 Malloc0 00:08:32.732 21:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:32.988 Delay0 00:08:32.988 21:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.245 21:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:33.501 NULL1 00:08:33.502 21:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:33.759 21:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=293576 00:08:33.759 21:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:33.759 21:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.759 21:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:33.759 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.017 21:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.274 21:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:34.274 21:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:34.274 true 00:08:34.531 21:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:34.531 21:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.531 21:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.789 21:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:34.789 21:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:35.046 true 00:08:35.046 21:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:35.047 21:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.978 Read completed with error (sct=0, sc=11) 00:08:35.978 21:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.978 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.978 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:35.978 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.236 21:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:36.236 21:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:36.494 true 00:08:36.494 21:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:36.494 21:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.752 21:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.318 21:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:37.318 21:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:37.318 true 00:08:37.318 21:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:37.318 21:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.575 21:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.833 21:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:37.833 21:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:38.090 true 00:08:38.090 21:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:38.090 21:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.460 21:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.460 21:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:39.460 21:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:39.716 true 00:08:39.716 21:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:39.716 21:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.279 21:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.279 
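Each iteration repeating in this log follows the loop at @44-@50 of ns_hotplug_stress.sh: while the perf initiator is still alive, namespace 1 is detached, Delay0 is re-attached, and the NULL1 bdev is grown by one block. A hedged sketch of that loop, condensed from the RPC calls above (error handling and the surrounding test harness omitted):

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do   # loop for as long as perf runs
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))            # 1001, 1002, ... as seen above
    $rpc bdev_null_resize NULL1 $null_size
done
```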
21:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:40.279 21:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:40.535 true 00:08:40.535 21:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:40.535 21:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.791 21:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.047 21:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:41.047 21:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:41.303 true 00:08:41.303 21:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:41.303 21:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.252 21:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:42.552 21:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:42.552 21:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:42.862 true 00:08:42.862 21:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:42.862 21:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.140 21:30:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.397 21:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:43.397 21:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:43.668 true 00:08:43.668 21:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:43.668 21:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.607 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.607 21:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.865 21:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:44.865 21:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:08:45.122 true 00:08:45.122 21:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:45.122 21:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.378 21:30:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.636 21:30:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:45.636 21:30:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:45.894 true 00:08:45.894 21:30:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:45.894 21:30:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.152 21:30:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.409 21:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:46.409 21:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:46.667 true 00:08:46.667 21:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:46.667 21:30:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.599 21:30:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.857 21:30:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:47.857 21:30:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:48.114 true 00:08:48.114 21:30:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:48.114 21:30:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.371 21:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.628 21:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:48.628 21:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:48.886 true 00:08:48.886 21:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:48.886 21:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.143 21:30:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.421 21:30:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:49.421 21:30:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:49.679 true 00:08:49.679 21:30:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:49.679 21:30:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.610 21:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.868 21:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:50.868 21:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:51.433 true 00:08:51.433 21:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:51.433 21:30:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.690 21:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.947 21:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:51.947 21:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1018 00:08:52.204 true 00:08:52.204 21:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:52.204 21:30:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.461 21:30:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.717 21:30:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:52.717 21:30:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:52.975 true 00:08:52.975 21:30:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:52.975 21:30:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.908 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.908 21:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.908 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.908 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.165 21:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:54.165 21:30:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:54.423 true 
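For reference, the subsystem provisioning performed near the top of this run (@27-@36 of ns_hotplug_stress.sh) boils down to the following RPC sequence; a condensed sketch, with the rpc.py path shortened for readability (the log uses the full workspace path):

```shell
rpc=./scripts/rpc.py   # shortened; see the full path in the log above
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```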
00:08:54.423 21:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:54.423 21:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.680 21:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.938 21:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:54.938 21:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:55.196 true 00:08:55.196 21:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:55.196 21:30:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.454 21:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.712 21:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:55.712 21:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:55.969 true 00:08:55.969 21:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:55.969 21:30:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:56.900 21:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.158 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.158 21:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:57.158 21:30:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:57.415 true 00:08:57.672 21:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:57.672 21:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.930 21:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.188 21:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:58.188 21:30:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:58.445 true 00:08:58.445 21:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:58.445 21:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.701 21:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.963 21:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:58.963 21:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:59.219 true 00:08:59.219 21:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:08:59.219 21:30:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.151 21:30:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.408 21:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:00.408 21:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:00.665 true 00:09:00.665 21:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:09:00.665 21:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.922 21:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.179 
21:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:01.179 21:30:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:01.436 true 00:09:01.436 21:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:09:01.436 21:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.998 21:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.998 21:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:01.998 21:30:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:02.562 true 00:09:02.562 21:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576 00:09:02.562 21:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.492 21:30:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.492 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.492 21:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:03.492 21:30:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:03.749 true
00:09:03.749 21:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576
00:09:03.749 21:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:04.007 21:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:04.007 Initializing NVMe Controllers
00:09:04.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:04.007 Controller IO queue size 128, less than required.
00:09:04.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:04.007 Controller IO queue size 128, less than required.
00:09:04.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:04.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:04.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:04.007 Initialization complete. Launching workers.
00:09:04.007 ========================================================
00:09:04.007                                                                        Latency(us)
00:09:04.007 Device Information                                                   :    IOPS   MiB/s    Average        min        max
00:09:04.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  650.51    0.32   73971.19    2262.53 1010612.43
00:09:04.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8020.85    3.92   15910.79    3309.90  539211.30
00:09:04.007 ========================================================
00:09:04.007 Total                                                                : 8671.36    4.23   20266.37    2262.53 1010612.43
00:09:04.007
00:09:04.264 21:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:09:04.264 21:30:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:09:04.521 true
00:09:04.521 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 293576
00:09:04.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (293576) - No such process
00:09:04.522 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 293576
00:09:04.522 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:04.778 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:05.036 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:05.036 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:05.036 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:05.036 21:30:55
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:05.036 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:05.293 null0 00:09:05.293 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:05.293 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:05.293 21:30:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:05.550 null1 00:09:05.550 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:05.550 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:05.550 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:05.807 null2 00:09:05.807 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:05.807 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:05.807 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:06.065 null3 00:09:06.065 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:06.065 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:06.065 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:06.065 null4 
00:09:06.323 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:06.323 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:06.323 21:30:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:06.323 null5 00:09:06.323 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:06.323 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:06.323 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:06.580 null6 00:09:06.580 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:06.580 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:06.580 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:06.838 null7 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 296767 296768 296770 296772 296774 296776 296778 296780 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:06.838 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:07.096 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:07.096 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:07.096 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:07.096 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.096 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:07.096 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:07.096 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:07.096 21:30:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.354 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.354 21:30:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:07.612 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:07.612 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:07.612 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:07.612 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:07.612 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:07.612 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:07.869 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:08.127 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:08.127 21:30:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:08.384 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:08.384 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.384 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:08.384 21:30:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.384 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:08.641 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.641 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.641 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:08.641 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.641 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.642 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:08.898 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.156 21:30:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:09:09.156 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:09.413 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:09.413 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.413 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:09.413 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:09.413 21:30:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.413 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.670 21:31:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.670 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
4 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:09.927 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:10.183 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:10.440 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:10.440 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.440 21:31:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.440 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:10.696 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.696 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.696 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:10.696 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.696 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.696 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:10.696 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.697 21:31:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:10.697 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:10.954 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.211 21:31:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:11.211 21:31:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.468 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:11.726 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.983 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:11.983 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:11.983 21:31:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.983 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:11.983 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:11.983 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:11.983 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:11.983 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:11.983 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.241 21:31:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.241 rmmod nvme_tcp 00:09:12.241 rmmod nvme_fabrics 00:09:12.241 rmmod nvme_keyring 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 293252 ']' 00:09:12.241 
21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 293252 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 293252 ']' 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 293252 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 293252 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 293252' 00:09:12.241 killing process with pid 293252 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 293252 00:09:12.241 21:31:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 293252 00:09:12.500 21:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:12.500 21:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:12.500 21:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:12.500 21:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:12.500 21:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:12.500 21:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.500 21:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:09:12.500 21:31:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.404 21:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:14.404 00:09:14.404 real 0m45.645s 00:09:14.404 user 3m31.680s 00:09:14.404 sys 0m14.671s 00:09:14.404 21:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.404 21:31:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:14.404 ************************************ 00:09:14.404 END TEST nvmf_ns_hotplug_stress 00:09:14.404 ************************************ 00:09:14.404 21:31:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:14.404 21:31:05 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:14.404 21:31:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:14.404 21:31:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.404 21:31:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:14.663 ************************************ 00:09:14.663 START TEST nvmf_connect_stress 00:09:14.663 ************************************ 00:09:14.663 21:31:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:14.663 * Looking for test storage... 
00:09:14.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.663 21:31:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.663 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:14.663 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.663 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.664 21:31:05 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:14.664 21:31:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:16.566 21:31:06 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:16.566 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.567 
21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:16.567 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:16.567 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:16.567 
21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:16.567 Found net devices under 0000:08:00.0: cvl_0_0 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.567 21:31:06 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:16.567 Found net devices under 0000:08:00.1: cvl_0_1 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:16.567 21:31:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:16.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:09:16.567 00:09:16.567 --- 10.0.0.2 ping statistics --- 00:09:16.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.567 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:09:16.567 00:09:16.567 --- 10.0.0.1 ping statistics --- 00:09:16.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.567 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:16.567 21:31:07 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=298841 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 298841 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 298841 ']' 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.567 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.567 [2024-07-15 21:31:07.132273] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:16.567 [2024-07-15 21:31:07.132364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.567 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.567 [2024-07-15 21:31:07.196795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:16.567 [2024-07-15 21:31:07.312547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:16.567 [2024-07-15 21:31:07.312594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.567 [2024-07-15 21:31:07.312610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.568 [2024-07-15 21:31:07.312625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.568 [2024-07-15 21:31:07.312637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.568 [2024-07-15 21:31:07.312734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.568 [2024-07-15 21:31:07.312802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.568 [2024-07-15 21:31:07.312798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.826 [2024-07-15 21:31:07.449322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.826 21:31:07 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.826 [2024-07-15 21:31:07.485292] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:16.826 NULL1 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=298950 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.826 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.083 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.083 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:17.083 21:31:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.083 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.083 21:31:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.647 21:31:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.647 21:31:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:17.647 21:31:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.647 21:31:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.648 21:31:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:17.904 21:31:08 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.904 21:31:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:17.905 21:31:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.905 21:31:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.905 21:31:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.161 21:31:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.161 21:31:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:18.161 21:31:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:18.161 21:31:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.161 21:31:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.418 21:31:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.418 21:31:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:18.418 21:31:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:18.418 21:31:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.418 21:31:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:18.990 21:31:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.990 21:31:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:18.990 21:31:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:18.990 21:31:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.990 21:31:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.248 21:31:09 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.248 21:31:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:19.248 21:31:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:19.248 21:31:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.248 21:31:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.505 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.505 21:31:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:19.505 21:31:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:19.505 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.505 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.762 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.762 21:31:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:19.762 21:31:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:19.762 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.762 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.019 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.019 21:31:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:20.019 21:31:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:20.020 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.020 21:31:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.584 21:31:11 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.584 21:31:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:20.584 21:31:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:20.584 21:31:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.584 21:31:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.842 21:31:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.842 21:31:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:20.842 21:31:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:20.842 21:31:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.842 21:31:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.099 21:31:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.099 21:31:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:21.099 21:31:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:21.099 21:31:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.099 21:31:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.356 21:31:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.356 21:31:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:21.356 21:31:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:21.356 21:31:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.356 21:31:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.613 21:31:12 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.613 21:31:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:21.613 21:31:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:21.613 21:31:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.613 21:31:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.178 21:31:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.178 21:31:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:22.178 21:31:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.178 21:31:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.178 21:31:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.435 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.435 21:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:22.435 21:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.435 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.435 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.692 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.692 21:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:22.692 21:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.692 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.692 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:22.949 21:31:13 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.949 21:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:22.949 21:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:22.949 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.949 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.206 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.206 21:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:23.206 21:31:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.206 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.206 21:31:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.768 21:31:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.768 21:31:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:23.768 21:31:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:23.768 21:31:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.768 21:31:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.026 21:31:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.026 21:31:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:24.026 21:31:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.026 21:31:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.026 21:31:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.283 21:31:14 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.283 21:31:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:24.283 21:31:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.283 21:31:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.283 21:31:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.540 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.540 21:31:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:24.540 21:31:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.540 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.540 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.798 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.798 21:31:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:24.798 21:31:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:24.798 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.798 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.363 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.364 21:31:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:25.364 21:31:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.364 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.364 21:31:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.621 21:31:16 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.621 21:31:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:25.621 21:31:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.621 21:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.621 21:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.879 21:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.879 21:31:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:25.879 21:31:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:25.879 21:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.879 21:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.137 21:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.137 21:31:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:26.137 21:31:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.137 21:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.137 21:31:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.702 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.702 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:26.702 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.702 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.702 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.960 21:31:17 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.960 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:26.960 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:26.960 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.960 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:26.960 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 298950 00:09:27.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (298950) - No such process 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 298950 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.218 rmmod nvme_tcp 00:09:27.218 rmmod nvme_fabrics 00:09:27.218 rmmod 
nvme_keyring 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 298841 ']' 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 298841 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 298841 ']' 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 298841 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 298841 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 298841' 00:09:27.218 killing process with pid 298841 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 298841 00:09:27.218 21:31:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 298841 00:09:27.478 21:31:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.478 21:31:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.478 21:31:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.478 21:31:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.478 21:31:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.478 21:31:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.478 21:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.478 21:31:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.386 21:31:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:29.386 00:09:29.386 real 0m14.933s 00:09:29.386 user 0m39.754s 00:09:29.386 sys 0m4.205s 00:09:29.386 21:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.386 21:31:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:29.386 ************************************ 00:09:29.386 END TEST nvmf_connect_stress 00:09:29.386 ************************************ 00:09:29.386 21:31:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:29.386 21:31:20 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:29.386 21:31:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:29.386 21:31:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.386 21:31:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.644 ************************************ 00:09:29.644 START TEST nvmf_fused_ordering 00:09:29.644 ************************************ 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:29.644 * Looking for test storage... 
00:09:29.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.644 21:31:20 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:29.644 21:31:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.547 21:31:21 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:31.547 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.548 
21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:31.548 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:31.548 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.548 
21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:31.548 Found net devices under 0000:08:00.0: cvl_0_0 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.548 21:31:21 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:31.548 Found net devices under 0000:08:00.1: cvl_0_1 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
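The discovery loop traced above (nvmf/common.sh@382–@401) resolves each supported PCI function to its kernel net device by globbing `/sys/bus/pci/devices/<bdf>/net/*`. A minimal sketch of that lookup follows; it is demonstrated against a throwaway directory tree so it runs without the real NIC (the BDF `0000:08:00.0` and device name `cvl_0_0` mirror this log, and `SYSFS_PCI` is an illustrative override, not a variable the test scripts define):

```shell
set -euo pipefail

# Map a PCI function (BDF) to its net device name(s) by globbing
# <sysfs>/<bdf>/net/*, as nvmf/common.sh does in the trace above.
net_devs_for_pci() {
  local d
  for d in "${SYSFS_PCI:-/sys/bus/pci/devices}/$1/net/"*; do
    [ -e "$d" ] || continue   # glob may not match if the device has no netdev
    echo "${d##*/}"
  done
}

# Self-contained demo against a fake sysfs tree; on a real machine,
# leave SYSFS_PCI unset to query /sys/bus/pci/devices directly.
SYSFS_PCI=$(mktemp -d)
mkdir -p "$SYSFS_PCI/0000:08:00.0/net/cvl_0_0"
net_devs_for_pci 0000:08:00.0   # prints: cvl_0_0
rm -rf "$SYSFS_PCI"
```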
00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.548 21:31:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:09:31.548 00:09:31.548 --- 10.0.0.2 ping statistics --- 00:09:31.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.548 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:09:31.548 00:09:31.548 --- 10.0.0.1 ping statistics --- 00:09:31.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.548 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:31.548 21:31:22 
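The nvmf_tcp_init trace above (nvmf/common.sh@229–@268) isolates the target NIC port: `cvl_0_0` is moved into a dedicated namespace `cvl_0_0_ns_spdk` and addressed as 10.0.0.2, while its peer port `cvl_0_1` stays in the root namespace as the initiator side at 10.0.0.1, with an iptables rule admitting NVMe/TCP traffic on port 4420; the two pings then verify connectivity in both directions. The same sequence can be sketched as a dry-run script (interface names and addresses copied from this log; applying it for real requires root, and `DRY_RUN` is an illustrative knob, not part of the harness):

```shell
set -euo pipefail

# Dry-run sketch of the netns setup performed by nvmf/common.sh in the
# trace above. Interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24
# addressing come from this log. Set DRY_RUN=0 and run as root to apply.
DRY_RUN=${DRY_RUN:-1}
TARGET_IF=cvl_0_0            # port handed to the SPDK target
INITIATOR_IF=cvl_0_1         # peer port left in the root namespace
NETNS="${TARGET_IF}_ns_spdk" # namespace name used by the test scripts

run() {  # echo the command in dry-run mode, execute it otherwise
  if (( DRY_RUN )); then echo "+ $*"; else "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```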
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=301375 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 301375 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 301375 ']' 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.548 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.548 [2024-07-15 21:31:22.131169] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:31.548 [2024-07-15 21:31:22.131257] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.548 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.548 [2024-07-15 21:31:22.195267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.548 [2024-07-15 21:31:22.310378] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:31.548 [2024-07-15 21:31:22.310434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.548 [2024-07-15 21:31:22.310451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.548 [2024-07-15 21:31:22.310464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.548 [2024-07-15 21:31:22.310476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.548 [2024-07-15 21:31:22.310510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.806 [2024-07-15 21:31:22.450238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.806 [2024-07-15 21:31:22.466372] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.806 NULL1 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
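The `rpc_cmd` calls traced above provision the target before the fused-ordering client connects: create the TCP transport, a subsystem, a listener inside the target namespace, and a 1000 MB null bdev attached as namespace 1. Outside the harness the same sequence can be issued with SPDK's `scripts/rpc.py` against a running `nvmf_tgt`; the sketch below copies every value (NQN, serial, listen address, bdev geometry) from this log, and defaults `RPC` to `echo` so it is a dry-run rather than the harness's exact `rpc_cmd` wrapper:

```shell
set -euo pipefail

# Sketch of the target provisioning performed via rpc_cmd in the trace.
# RPC defaults to `echo` (dry-run); set RPC=./scripts/rpc.py from an SPDK
# checkout, with nvmf_tgt already running, to apply for real.
RPC=${RPC:-echo}

provision_target() {
  $RPC nvmf_create_transport -t tcp -o -u 8192   # flags exactly as issued in the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512           # 1000 MB null bdev, 512-byte blocks
  $RPC bdev_wait_for_examine                     # let bdev examination finish first
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
}

provision_target
```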
00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.806 21:31:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:31.806 [2024-07-15 21:31:22.512901] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:31.806 [2024-07-15 21:31:22.512950] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301395 ] 00:09:31.806 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.371 Attached to nqn.2016-06.io.spdk:cnode1 00:09:32.371 Namespace ID: 1 size: 1GB
00:09:32.371 fused_ordering(0)
[fused_ordering(1) … fused_ordering(632): 632 identical per-command progress lines elided; timestamps advance 00:09:32.371 → 00:09:32.628 → 00:09:32.886 → 00:09:33.452]
00:09:33.452 fused_ordering(633)
00:09:33.452 fused_ordering(634) 00:09:33.452 fused_ordering(635) 00:09:33.452 fused_ordering(636) 00:09:33.452 fused_ordering(637) 00:09:33.452 fused_ordering(638) 00:09:33.452 fused_ordering(639) 00:09:33.452 fused_ordering(640) 00:09:33.452 fused_ordering(641) 00:09:33.452 fused_ordering(642) 00:09:33.452 fused_ordering(643) 00:09:33.452 fused_ordering(644) 00:09:33.452 fused_ordering(645) 00:09:33.452 fused_ordering(646) 00:09:33.452 fused_ordering(647) 00:09:33.452 fused_ordering(648) 00:09:33.452 fused_ordering(649) 00:09:33.452 fused_ordering(650) 00:09:33.452 fused_ordering(651) 00:09:33.452 fused_ordering(652) 00:09:33.452 fused_ordering(653) 00:09:33.452 fused_ordering(654) 00:09:33.452 fused_ordering(655) 00:09:33.452 fused_ordering(656) 00:09:33.452 fused_ordering(657) 00:09:33.452 fused_ordering(658) 00:09:33.452 fused_ordering(659) 00:09:33.452 fused_ordering(660) 00:09:33.452 fused_ordering(661) 00:09:33.452 fused_ordering(662) 00:09:33.452 fused_ordering(663) 00:09:33.452 fused_ordering(664) 00:09:33.452 fused_ordering(665) 00:09:33.452 fused_ordering(666) 00:09:33.452 fused_ordering(667) 00:09:33.452 fused_ordering(668) 00:09:33.452 fused_ordering(669) 00:09:33.452 fused_ordering(670) 00:09:33.452 fused_ordering(671) 00:09:33.452 fused_ordering(672) 00:09:33.452 fused_ordering(673) 00:09:33.452 fused_ordering(674) 00:09:33.452 fused_ordering(675) 00:09:33.452 fused_ordering(676) 00:09:33.452 fused_ordering(677) 00:09:33.452 fused_ordering(678) 00:09:33.452 fused_ordering(679) 00:09:33.452 fused_ordering(680) 00:09:33.452 fused_ordering(681) 00:09:33.452 fused_ordering(682) 00:09:33.452 fused_ordering(683) 00:09:33.452 fused_ordering(684) 00:09:33.452 fused_ordering(685) 00:09:33.452 fused_ordering(686) 00:09:33.452 fused_ordering(687) 00:09:33.452 fused_ordering(688) 00:09:33.452 fused_ordering(689) 00:09:33.452 fused_ordering(690) 00:09:33.452 fused_ordering(691) 00:09:33.452 fused_ordering(692) 00:09:33.452 fused_ordering(693) 00:09:33.452 
fused_ordering(694) 00:09:33.452 fused_ordering(695) 00:09:33.452 fused_ordering(696) 00:09:33.452 fused_ordering(697) 00:09:33.452 fused_ordering(698) 00:09:33.452 fused_ordering(699) 00:09:33.452 fused_ordering(700) 00:09:33.452 fused_ordering(701) 00:09:33.452 fused_ordering(702) 00:09:33.452 fused_ordering(703) 00:09:33.452 fused_ordering(704) 00:09:33.452 fused_ordering(705) 00:09:33.452 fused_ordering(706) 00:09:33.452 fused_ordering(707) 00:09:33.452 fused_ordering(708) 00:09:33.452 fused_ordering(709) 00:09:33.452 fused_ordering(710) 00:09:33.452 fused_ordering(711) 00:09:33.452 fused_ordering(712) 00:09:33.452 fused_ordering(713) 00:09:33.452 fused_ordering(714) 00:09:33.452 fused_ordering(715) 00:09:33.452 fused_ordering(716) 00:09:33.452 fused_ordering(717) 00:09:33.452 fused_ordering(718) 00:09:33.452 fused_ordering(719) 00:09:33.452 fused_ordering(720) 00:09:33.452 fused_ordering(721) 00:09:33.452 fused_ordering(722) 00:09:33.452 fused_ordering(723) 00:09:33.452 fused_ordering(724) 00:09:33.452 fused_ordering(725) 00:09:33.452 fused_ordering(726) 00:09:33.452 fused_ordering(727) 00:09:33.452 fused_ordering(728) 00:09:33.452 fused_ordering(729) 00:09:33.452 fused_ordering(730) 00:09:33.452 fused_ordering(731) 00:09:33.452 fused_ordering(732) 00:09:33.452 fused_ordering(733) 00:09:33.452 fused_ordering(734) 00:09:33.452 fused_ordering(735) 00:09:33.452 fused_ordering(736) 00:09:33.452 fused_ordering(737) 00:09:33.452 fused_ordering(738) 00:09:33.452 fused_ordering(739) 00:09:33.452 fused_ordering(740) 00:09:33.452 fused_ordering(741) 00:09:33.452 fused_ordering(742) 00:09:33.452 fused_ordering(743) 00:09:33.452 fused_ordering(744) 00:09:33.452 fused_ordering(745) 00:09:33.452 fused_ordering(746) 00:09:33.452 fused_ordering(747) 00:09:33.452 fused_ordering(748) 00:09:33.452 fused_ordering(749) 00:09:33.452 fused_ordering(750) 00:09:33.452 fused_ordering(751) 00:09:33.452 fused_ordering(752) 00:09:33.452 fused_ordering(753) 00:09:33.452 fused_ordering(754) 
00:09:33.452 fused_ordering(755) 00:09:33.452 fused_ordering(756) 00:09:33.452 fused_ordering(757) 00:09:33.452 fused_ordering(758) 00:09:33.452 fused_ordering(759) 00:09:33.452 fused_ordering(760) 00:09:33.452 fused_ordering(761) 00:09:33.452 fused_ordering(762) 00:09:33.452 fused_ordering(763) 00:09:33.452 fused_ordering(764) 00:09:33.453 fused_ordering(765) 00:09:33.453 fused_ordering(766) 00:09:33.453 fused_ordering(767) 00:09:33.453 fused_ordering(768) 00:09:33.453 fused_ordering(769) 00:09:33.453 fused_ordering(770) 00:09:33.453 fused_ordering(771) 00:09:33.453 fused_ordering(772) 00:09:33.453 fused_ordering(773) 00:09:33.453 fused_ordering(774) 00:09:33.453 fused_ordering(775) 00:09:33.453 fused_ordering(776) 00:09:33.453 fused_ordering(777) 00:09:33.453 fused_ordering(778) 00:09:33.453 fused_ordering(779) 00:09:33.453 fused_ordering(780) 00:09:33.453 fused_ordering(781) 00:09:33.453 fused_ordering(782) 00:09:33.453 fused_ordering(783) 00:09:33.453 fused_ordering(784) 00:09:33.453 fused_ordering(785) 00:09:33.453 fused_ordering(786) 00:09:33.453 fused_ordering(787) 00:09:33.453 fused_ordering(788) 00:09:33.453 fused_ordering(789) 00:09:33.453 fused_ordering(790) 00:09:33.453 fused_ordering(791) 00:09:33.453 fused_ordering(792) 00:09:33.453 fused_ordering(793) 00:09:33.453 fused_ordering(794) 00:09:33.453 fused_ordering(795) 00:09:33.453 fused_ordering(796) 00:09:33.453 fused_ordering(797) 00:09:33.453 fused_ordering(798) 00:09:33.453 fused_ordering(799) 00:09:33.453 fused_ordering(800) 00:09:33.453 fused_ordering(801) 00:09:33.453 fused_ordering(802) 00:09:33.453 fused_ordering(803) 00:09:33.453 fused_ordering(804) 00:09:33.453 fused_ordering(805) 00:09:33.453 fused_ordering(806) 00:09:33.453 fused_ordering(807) 00:09:33.453 fused_ordering(808) 00:09:33.453 fused_ordering(809) 00:09:33.453 fused_ordering(810) 00:09:33.453 fused_ordering(811) 00:09:33.453 fused_ordering(812) 00:09:33.453 fused_ordering(813) 00:09:33.453 fused_ordering(814) 00:09:33.453 
fused_ordering(815) 00:09:33.453 fused_ordering(816) 00:09:33.453 fused_ordering(817) 00:09:33.453 fused_ordering(818) 00:09:33.453 fused_ordering(819) 00:09:33.453 fused_ordering(820) 00:09:34.031 fused_ordering(821) 00:09:34.031 fused_ordering(822) 00:09:34.031 fused_ordering(823) 00:09:34.031 fused_ordering(824) 00:09:34.031 fused_ordering(825) 00:09:34.031 fused_ordering(826) 00:09:34.031 fused_ordering(827) 00:09:34.031 fused_ordering(828) 00:09:34.031 fused_ordering(829) 00:09:34.031 fused_ordering(830) 00:09:34.031 fused_ordering(831) 00:09:34.031 fused_ordering(832) 00:09:34.031 fused_ordering(833) 00:09:34.031 fused_ordering(834) 00:09:34.031 fused_ordering(835) 00:09:34.031 fused_ordering(836) 00:09:34.031 fused_ordering(837) 00:09:34.031 fused_ordering(838) 00:09:34.031 fused_ordering(839) 00:09:34.031 fused_ordering(840) 00:09:34.031 fused_ordering(841) 00:09:34.031 fused_ordering(842) 00:09:34.031 fused_ordering(843) 00:09:34.031 fused_ordering(844) 00:09:34.031 fused_ordering(845) 00:09:34.031 fused_ordering(846) 00:09:34.031 fused_ordering(847) 00:09:34.031 fused_ordering(848) 00:09:34.031 fused_ordering(849) 00:09:34.031 fused_ordering(850) 00:09:34.031 fused_ordering(851) 00:09:34.031 fused_ordering(852) 00:09:34.031 fused_ordering(853) 00:09:34.031 fused_ordering(854) 00:09:34.031 fused_ordering(855) 00:09:34.031 fused_ordering(856) 00:09:34.031 fused_ordering(857) 00:09:34.031 fused_ordering(858) 00:09:34.031 fused_ordering(859) 00:09:34.031 fused_ordering(860) 00:09:34.031 fused_ordering(861) 00:09:34.031 fused_ordering(862) 00:09:34.031 fused_ordering(863) 00:09:34.031 fused_ordering(864) 00:09:34.031 fused_ordering(865) 00:09:34.031 fused_ordering(866) 00:09:34.031 fused_ordering(867) 00:09:34.031 fused_ordering(868) 00:09:34.031 fused_ordering(869) 00:09:34.031 fused_ordering(870) 00:09:34.031 fused_ordering(871) 00:09:34.031 fused_ordering(872) 00:09:34.031 fused_ordering(873) 00:09:34.031 fused_ordering(874) 00:09:34.031 fused_ordering(875) 
00:09:34.031 fused_ordering(876) 00:09:34.031 fused_ordering(877) 00:09:34.031 fused_ordering(878) 00:09:34.031 fused_ordering(879) 00:09:34.031 fused_ordering(880) 00:09:34.031 fused_ordering(881) 00:09:34.031 fused_ordering(882) 00:09:34.031 fused_ordering(883) 00:09:34.031 fused_ordering(884) 00:09:34.031 fused_ordering(885) 00:09:34.031 fused_ordering(886) 00:09:34.031 fused_ordering(887) 00:09:34.031 fused_ordering(888) 00:09:34.031 fused_ordering(889) 00:09:34.031 fused_ordering(890) 00:09:34.031 fused_ordering(891) 00:09:34.031 fused_ordering(892) 00:09:34.031 fused_ordering(893) 00:09:34.031 fused_ordering(894) 00:09:34.031 fused_ordering(895) 00:09:34.031 fused_ordering(896) 00:09:34.031 fused_ordering(897) 00:09:34.031 fused_ordering(898) 00:09:34.031 fused_ordering(899) 00:09:34.032 fused_ordering(900) 00:09:34.032 fused_ordering(901) 00:09:34.032 fused_ordering(902) 00:09:34.032 fused_ordering(903) 00:09:34.032 fused_ordering(904) 00:09:34.032 fused_ordering(905) 00:09:34.032 fused_ordering(906) 00:09:34.032 fused_ordering(907) 00:09:34.032 fused_ordering(908) 00:09:34.032 fused_ordering(909) 00:09:34.032 fused_ordering(910) 00:09:34.032 fused_ordering(911) 00:09:34.032 fused_ordering(912) 00:09:34.032 fused_ordering(913) 00:09:34.032 fused_ordering(914) 00:09:34.032 fused_ordering(915) 00:09:34.032 fused_ordering(916) 00:09:34.032 fused_ordering(917) 00:09:34.032 fused_ordering(918) 00:09:34.032 fused_ordering(919) 00:09:34.032 fused_ordering(920) 00:09:34.032 fused_ordering(921) 00:09:34.032 fused_ordering(922) 00:09:34.032 fused_ordering(923) 00:09:34.032 fused_ordering(924) 00:09:34.032 fused_ordering(925) 00:09:34.032 fused_ordering(926) 00:09:34.032 fused_ordering(927) 00:09:34.032 fused_ordering(928) 00:09:34.032 fused_ordering(929) 00:09:34.032 fused_ordering(930) 00:09:34.032 fused_ordering(931) 00:09:34.032 fused_ordering(932) 00:09:34.032 fused_ordering(933) 00:09:34.032 fused_ordering(934) 00:09:34.032 fused_ordering(935) 00:09:34.032 
fused_ordering(936) 00:09:34.032 fused_ordering(937) 00:09:34.032 fused_ordering(938) 00:09:34.032 fused_ordering(939) 00:09:34.032 fused_ordering(940) 00:09:34.032 fused_ordering(941) 00:09:34.032 fused_ordering(942) 00:09:34.032 fused_ordering(943) 00:09:34.032 fused_ordering(944) 00:09:34.032 fused_ordering(945) 00:09:34.032 fused_ordering(946) 00:09:34.032 fused_ordering(947) 00:09:34.032 fused_ordering(948) 00:09:34.032 fused_ordering(949) 00:09:34.032 fused_ordering(950) 00:09:34.032 fused_ordering(951) 00:09:34.032 fused_ordering(952) 00:09:34.032 fused_ordering(953) 00:09:34.032 fused_ordering(954) 00:09:34.032 fused_ordering(955) 00:09:34.032 fused_ordering(956) 00:09:34.032 fused_ordering(957) 00:09:34.032 fused_ordering(958) 00:09:34.032 fused_ordering(959) 00:09:34.032 fused_ordering(960) 00:09:34.032 fused_ordering(961) 00:09:34.032 fused_ordering(962) 00:09:34.032 fused_ordering(963) 00:09:34.032 fused_ordering(964) 00:09:34.032 fused_ordering(965) 00:09:34.032 fused_ordering(966) 00:09:34.032 fused_ordering(967) 00:09:34.032 fused_ordering(968) 00:09:34.032 fused_ordering(969) 00:09:34.032 fused_ordering(970) 00:09:34.032 fused_ordering(971) 00:09:34.032 fused_ordering(972) 00:09:34.032 fused_ordering(973) 00:09:34.032 fused_ordering(974) 00:09:34.032 fused_ordering(975) 00:09:34.032 fused_ordering(976) 00:09:34.032 fused_ordering(977) 00:09:34.032 fused_ordering(978) 00:09:34.032 fused_ordering(979) 00:09:34.032 fused_ordering(980) 00:09:34.032 fused_ordering(981) 00:09:34.032 fused_ordering(982) 00:09:34.032 fused_ordering(983) 00:09:34.032 fused_ordering(984) 00:09:34.032 fused_ordering(985) 00:09:34.032 fused_ordering(986) 00:09:34.032 fused_ordering(987) 00:09:34.032 fused_ordering(988) 00:09:34.032 fused_ordering(989) 00:09:34.032 fused_ordering(990) 00:09:34.032 fused_ordering(991) 00:09:34.032 fused_ordering(992) 00:09:34.032 fused_ordering(993) 00:09:34.032 fused_ordering(994) 00:09:34.032 fused_ordering(995) 00:09:34.032 fused_ordering(996) 
00:09:34.032 fused_ordering(997) 00:09:34.032 fused_ordering(998) 00:09:34.032 fused_ordering(999) 00:09:34.032 fused_ordering(1000) 00:09:34.032 fused_ordering(1001) 00:09:34.032 fused_ordering(1002) 00:09:34.032 fused_ordering(1003) 00:09:34.032 fused_ordering(1004) 00:09:34.032 fused_ordering(1005) 00:09:34.032 fused_ordering(1006) 00:09:34.032 fused_ordering(1007) 00:09:34.032 fused_ordering(1008) 00:09:34.032 fused_ordering(1009) 00:09:34.032 fused_ordering(1010) 00:09:34.032 fused_ordering(1011) 00:09:34.032 fused_ordering(1012) 00:09:34.032 fused_ordering(1013) 00:09:34.032 fused_ordering(1014) 00:09:34.032 fused_ordering(1015) 00:09:34.032 fused_ordering(1016) 00:09:34.032 fused_ordering(1017) 00:09:34.032 fused_ordering(1018) 00:09:34.032 fused_ordering(1019) 00:09:34.032 fused_ordering(1020) 00:09:34.032 fused_ordering(1021) 00:09:34.032 fused_ordering(1022) 00:09:34.032 fused_ordering(1023) 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.289 rmmod nvme_tcp 00:09:34.289 rmmod nvme_fabrics 00:09:34.289 rmmod nvme_keyring 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:34.289 21:31:24 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 301375 ']' 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 301375 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 301375 ']' 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 301375 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 301375 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 301375' 00:09:34.289 killing process with pid 301375 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 301375 00:09:34.289 21:31:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 301375 00:09:34.548 21:31:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.548 21:31:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.548 21:31:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.548 21:31:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.548 21:31:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.548 21:31:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.548 21:31:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.548 21:31:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.454 21:31:27 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.454 00:09:36.454 real 0m6.967s 00:09:36.454 user 0m5.143s 00:09:36.454 sys 0m2.486s 00:09:36.454 21:31:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.454 21:31:27 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:36.454 ************************************ 00:09:36.454 END TEST nvmf_fused_ordering 00:09:36.454 ************************************ 00:09:36.454 21:31:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:36.454 21:31:27 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:36.454 21:31:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.454 21:31:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.454 21:31:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.454 ************************************ 00:09:36.454 START TEST nvmf_delete_subsystem 00:09:36.454 ************************************ 00:09:36.454 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:36.712 * Looking for test storage... 
00:09:36.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.713 21:31:27 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.713 21:31:27 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.700 21:31:28 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:38.700 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:38.700 Found 
0000:08:00.1 (0x8086 - 0x159b) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:38.700 Found net devices under 0000:08:00.0: cvl_0_0 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:38.700 Found net devices under 0000:08:00.1: cvl_0_1 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.700 
21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.700 21:31:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms
00:09:38.700
00:09:38.700 --- 10.0.0.2 ping statistics ---
00:09:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:38.700 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms
00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:38.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:38.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms
00:09:38.700
00:09:38.700 --- 10.0.0.1 ping statistics ---
00:09:38.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:38.700 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:38.700 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:38.701
21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=303102 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 303102 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 303102 ']' 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.701 [2024-07-15 21:31:29.161815] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:09:38.701 [2024-07-15 21:31:29.161903] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.701 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.701 [2024-07-15 21:31:29.225946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.701 [2024-07-15 21:31:29.342007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:38.701 [2024-07-15 21:31:29.342066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.701 [2024-07-15 21:31:29.342082] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.701 [2024-07-15 21:31:29.342095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.701 [2024-07-15 21:31:29.342107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.701 [2024-07-15 21:31:29.342211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.701 [2024-07-15 21:31:29.342283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.701 [2024-07-15 21:31:29.473856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:38.701 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.984 [2024-07-15 21:31:29.490020] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.984 NULL1 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.984 Delay0 00:09:38.984 21:31:29 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=303207 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:38.984 21:31:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:38.984 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.984 [2024-07-15 21:31:29.574771] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:40.886 21:31:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.886 21:31:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.886 21:31:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.143 Write completed with error (sct=0, sc=8) 00:09:41.143 Write completed with error (sct=0, sc=8) 00:09:41.143 Write completed with error (sct=0, sc=8) 00:09:41.143 Read completed with error (sct=0, sc=8) 00:09:41.143 starting I/O failed: -6 00:09:41.143 Write completed with error (sct=0, sc=8) 00:09:41.143 Read completed with error (sct=0, sc=8) 00:09:41.143 Write completed with error (sct=0, sc=8) 00:09:41.143 Read completed with error (sct=0, sc=8) 00:09:41.143 starting I/O failed: -6 00:09:41.143 Write completed with error (sct=0, sc=8) 00:09:41.143 Read completed with error (sct=0, sc=8) 00:09:41.143 Read completed with error (sct=0, sc=8) 00:09:41.143 Read completed with error (sct=0, sc=8) 00:09:41.143 starting I/O failed: -6 00:09:41.143 Read completed with error (sct=0, sc=8) 00:09:41.143 Read completed with error (sct=0, sc=8) 00:09:41.143 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error 
(sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 [2024-07-15 21:31:31.824584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167efa0 is same with the state(5) to be set 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 
starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 starting I/O failed: -6 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 [2024-07-15 21:31:31.826669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7fca0c000c00 is same with the state(5) to be set 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read 
completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, 
sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Read completed with error (sct=0, sc=8) 00:09:41.144 Write completed with error (sct=0, sc=8) 00:09:42.078 [2024-07-15 21:31:32.792732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165e600 is same with the state(5) to be set 00:09:42.078 Read completed with error (sct=0, sc=8) 00:09:42.078 Write completed with error (sct=0, sc=8) 00:09:42.078 Write completed with error (sct=0, sc=8) 00:09:42.078 Read completed with error (sct=0, sc=8) 00:09:42.078 Read completed with error (sct=0, sc=8) 00:09:42.078 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 
Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 [2024-07-15 21:31:32.829361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167edc0 is same with the state(5) to be set 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with 
error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 [2024-07-15 21:31:32.829678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f2b0 is same with the state(5) to be set 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, 
sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 [2024-07-15 21:31:32.829920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1681130 is same with the state(5) to be set 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 Write completed with error (sct=0, sc=8) 00:09:42.079 Read completed with error (sct=0, sc=8) 00:09:42.079 [2024-07-15 21:31:32.830060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fca0c00d2f0 is same with the state(5) to be set 00:09:42.079 Initializing NVMe Controllers 00:09:42.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:42.079 Controller IO queue size 128, less than required. 00:09:42.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:42.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:42.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:42.079 Initialization complete. Launching workers. 
00:09:42.079 ========================================================
00:09:42.079 Latency(us)
00:09:42.079 Device Information : IOPS MiB/s Average min max
00:09:42.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.30 0.09 957556.15 1290.96 1044559.69
00:09:42.079 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.02 0.07 909360.92 324.27 1013123.84
00:09:42.079 ========================================================
00:09:42.079 Total : 329.31 0.16 935893.74 324.27 1044559.69
00:09:42.079
00:09:42.079 [2024-07-15 21:31:32.830956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165e600 (9): Bad file descriptor
00:09:42.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:42.079 21:31:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:42.079 21:31:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:42.079 21:31:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 303207
00:09:42.079 21:31:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 303207
00:09:42.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (303207) - No such process
00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 303207
00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 303207
00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@636 -- # local arg=wait 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 303207 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.647 [2024-07-15 21:31:33.352543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=303526 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 303526 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:42.647 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:42.647 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.647 [2024-07-15 21:31:33.427121] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:43.211 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:43.211 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 303526 00:09:43.211 21:31:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:43.777 21:31:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:43.777 21:31:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 303526 00:09:43.777 21:31:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.342 21:31:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.342 21:31:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 303526 00:09:44.342 21:31:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.600 21:31:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.600 21:31:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 303526 00:09:44.600 21:31:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:45.167 21:31:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:45.167 21:31:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 303526 00:09:45.167 21:31:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:45.734 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:45.734 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 303526 00:09:45.734 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:45.994 Initializing NVMe Controllers 00:09:45.994 Attached to NVMe over Fabrics controller 
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:45.994 Controller IO queue size 128, less than required.
00:09:45.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:45.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:45.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:45.994 Initialization complete. Launching workers.
00:09:45.994 ========================================================
00:09:45.994 Latency(us)
00:09:45.994 Device Information : IOPS MiB/s Average min max
00:09:45.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004014.25 1000211.92 1042343.73
00:09:45.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004886.46 1000183.80 1042330.72
00:09:45.994 ========================================================
00:09:45.994 Total : 256.00 0.12 1004450.35 1000183.80 1042343.73
00:09:45.994
00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 303526
00:09:46.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (303526) - No such process
00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 303526
00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '['
tcp == tcp ']' 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.254 rmmod nvme_tcp 00:09:46.254 rmmod nvme_fabrics 00:09:46.254 rmmod nvme_keyring 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 303102 ']' 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 303102 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 303102 ']' 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 303102 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 303102 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 303102' 00:09:46.254 killing process with pid 303102 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 303102 00:09:46.254 21:31:36 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # wait 303102
00:09:46.514 21:31:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:46.514 21:31:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:46.514 21:31:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:46.514 21:31:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:46.514 21:31:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:46.514 21:31:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:46.514 21:31:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:46.514 21:31:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:48.422 21:31:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:48.422
00:09:48.422 real 0m11.964s
00:09:48.422 user 0m27.838s
00:09:48.422 sys 0m2.743s
00:09:48.422 21:31:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:48.422 21:31:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:48.422 ************************************
00:09:48.422 END TEST nvmf_delete_subsystem
00:09:48.422 ************************************
00:09:48.422 21:31:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:48.422 21:31:39 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:09:48.422 21:31:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:48.422 21:31:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:48.422 21:31:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:48.681 ************************************
00:09:48.681 START TEST 
nvmf_ns_masking 00:09:48.681 ************************************ 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:48.681 * Looking for test storage... 00:09:48.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c89ae767-91c7-4301-83a2-e77361219dfb 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9c8c6676-a52e-43a0-a505-4ce1146baf0b 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=11c72a97-1ef2-4203-b0df-687f50c3c592 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.681 21:31:39 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.681 21:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:50.582 21:31:40 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.582 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:50.583 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:50.583 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:50.583 Found net devices under 0000:08:00.0: cvl_0_0 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:50.583 Found net devices under 0000:08:00.1: cvl_0_1 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.583 21:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:50.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:50.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms
00:09:50.583
00:09:50.583 --- 10.0.0.2 ping statistics ---
00:09:50.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:50.583 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:50.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:50.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms
00:09:50.583
00:09:50.583 --- 10.0.0.1 ping statistics ---
00:09:50.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:50.583 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:50.583 21:31:41 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:50.583 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=305340 00:09:50.584 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:50.584 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 305340 00:09:50.584 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 305340 ']' 00:09:50.584 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.584 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.584 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.584 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.584 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:50.584 [2024-07-15 21:31:41.186704] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:09:50.584 [2024-07-15 21:31:41.186812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.584 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.584 [2024-07-15 21:31:41.253563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.584 [2024-07-15 21:31:41.371916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.584 [2024-07-15 21:31:41.371976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.584 [2024-07-15 21:31:41.371992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.584 [2024-07-15 21:31:41.372006] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.584 [2024-07-15 21:31:41.372017] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:50.584 [2024-07-15 21:31:41.372047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.842 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.842 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:50.842 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.842 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.842 21:31:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:50.842 21:31:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.842 21:31:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:51.099 [2024-07-15 21:31:41.780570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.099 21:31:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:51.099 21:31:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:51.099 21:31:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:51.356 Malloc1 00:09:51.356 21:31:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:51.613 Malloc2 00:09:51.871 21:31:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.130 21:31:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:52.389 21:31:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.647 [2024-07-15 21:31:43.226267] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.647 21:31:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:52.647 21:31:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11c72a97-1ef2-4203-b0df-687f50c3c592 -a 10.0.0.2 -s 4420 -i 4 00:09:52.907 21:31:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.907 21:31:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:52.907 21:31:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.907 21:31:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:52.907 21:31:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:54.843 
21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:54.843 [ 0]:0x1 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0180bb288a84d42b6b378a3286b501d 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0180bb288a84d42b6b378a3286b501d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.843 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:55.412 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:55.412 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:55.412 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:55.413 [ 0]:0x1 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0180bb288a84d42b6b378a3286b501d 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0180bb288a84d42b6b378a3286b501d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:55.413 [ 1]:0x2 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d6c6efd67014ec78343b10b8c87c05a 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d6c6efd67014ec78343b10b8c87c05a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:55.413 21:31:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.672 21:31:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.930 21:31:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:56.190 21:31:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:56.190 21:31:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11c72a97-1ef2-4203-b0df-687f50c3c592 -a 10.0.0.2 -s 4420 -i 4 00:09:56.190 21:31:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:56.190 21:31:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:56.190 21:31:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:56.190 21:31:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:56.190 21:31:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:56.190 21:31:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:58.731 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:58.731 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:58.731 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.731 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:58.732 21:31:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:58.732 [ 0]:0x2 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d6c6efd67014ec78343b10b8c87c05a 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d6c6efd67014ec78343b10b8c87c05a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:58.732 [ 0]:0x1 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0180bb288a84d42b6b378a3286b501d 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0180bb288a84d42b6b378a3286b501d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:09:58.732 [ 1]:0x2 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d6c6efd67014ec78343b10b8c87c05a 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d6c6efd67014ec78343b10b8c87c05a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.732 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:58.991 [ 0]:0x2 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d6c6efd67014ec78343b10b8c87c05a 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d6c6efd67014ec78343b10b8c87c05a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:58.991 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.251 21:31:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:59.511 21:31:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:59.511 21:31:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 11c72a97-1ef2-4203-b0df-687f50c3c592 -a 10.0.0.2 -s 4420 -i 4 00:09:59.511 21:31:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:59.511 21:31:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.511 21:31:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.511 21:31:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:59.511 21:31:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:59.511 21:31:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:01.417 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:01.417 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:01.417 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.417 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:01.417 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.417 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:01.417 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:01.417 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:01.675 [ 0]:0x1 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e0180bb288a84d42b6b378a3286b501d 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e0180bb288a84d42b6b378a3286b501d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:01.675 [ 1]:0x2 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d6c6efd67014ec78343b10b8c87c05a 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d6c6efd67014ec78343b10b8c87c05a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:01.675 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:01.934 21:31:52 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:01.934 [ 0]:0x2 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d6c6efd67014ec78343b10b8c87c05a 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d6c6efd67014ec78343b10b8c87c05a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:01.934 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:02.194 [2024-07-15 21:31:52.914881] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:02.194 request: 00:10:02.194 { 00:10:02.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.194 "nsid": 2, 00:10:02.194 "host": "nqn.2016-06.io.spdk:host1", 00:10:02.194 "method": "nvmf_ns_remove_host", 00:10:02.194 "req_id": 1 00:10:02.194 } 00:10:02.194 Got JSON-RPC error response 00:10:02.194 response: 00:10:02.194 { 00:10:02.194 "code": -32602, 00:10:02.194 "message": "Invalid parameters" 00:10:02.194 } 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 
00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:02.194 [ 0]:0x2 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:02.194 21:31:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8d6c6efd67014ec78343b10b8c87c05a 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8d6c6efd67014ec78343b10b8c87c05a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=306623 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 306623 /var/tmp/host.sock 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 306623 ']' 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:02.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.453 21:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:02.453 [2024-07-15 21:31:53.109375] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:10:02.453 [2024-07-15 21:31:53.109477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306623 ] 00:10:02.453 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.453 [2024-07-15 21:31:53.169208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.711 [2024-07-15 21:31:53.276244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.711 21:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:02.711 21:31:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:02.711 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.277 21:31:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.277 21:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c89ae767-91c7-4301-83a2-e77361219dfb 00:10:03.277 21:31:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:03.277 21:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C89AE76791C7430183A2E77361219DFB -i 00:10:03.844 21:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 
-- # uuid2nguid 9c8c6676-a52e-43a0-a505-4ce1146baf0b 00:10:03.844 21:31:54 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:03.844 21:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9C8C6676A52E43A0A5054CE1146BAF0B -i 00:10:04.101 21:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:04.360 21:31:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:04.620 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:04.620 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:04.878 nvme0n1 00:10:04.878 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:04.878 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:05.444 nvme1n2 00:10:05.444 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 
00:10:05.444 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:05.444 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:05.444 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:05.444 21:31:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:05.702 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:05.702 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:05.702 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:05.702 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:05.960 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c89ae767-91c7-4301-83a2-e77361219dfb == \c\8\9\a\e\7\6\7\-\9\1\c\7\-\4\3\0\1\-\8\3\a\2\-\e\7\7\3\6\1\2\1\9\d\f\b ]] 00:10:05.960 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:05.960 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:05.960 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9c8c6676-a52e-43a0-a505-4ce1146baf0b == \9\c\8\c\6\6\7\6\-\a\5\2\e\-\4\3\a\0\-\a\5\0\5\-\4\c\e\1\1\4\6\b\a\f\0\b ]] 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 306623 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 306623 ']' 
00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 306623 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 306623 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 306623' 00:10:06.218 killing process with pid 306623 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 306623 00:10:06.218 21:31:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 306623 00:10:06.476 21:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.735 rmmod nvme_tcp 00:10:06.735 rmmod nvme_fabrics 00:10:06.735 rmmod 
nvme_keyring 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 305340 ']' 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 305340 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 305340 ']' 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 305340 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 305340 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 305340' 00:10:06.735 killing process with pid 305340 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 305340 00:10:06.735 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 305340 00:10:06.994 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.994 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.994 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.994 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.994 21:31:57 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.994 21:31:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.994 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.994 21:31:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.526 21:31:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:09.526 00:10:09.526 real 0m20.524s 00:10:09.526 user 0m27.583s 00:10:09.526 sys 0m3.684s 00:10:09.526 21:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.526 21:31:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:09.526 ************************************ 00:10:09.526 END TEST nvmf_ns_masking 00:10:09.526 ************************************ 00:10:09.526 21:31:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:09.526 21:31:59 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:09.527 21:31:59 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:09.527 21:31:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.527 21:31:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.527 21:31:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.527 ************************************ 00:10:09.527 START TEST nvmf_nvme_cli 00:10:09.527 ************************************ 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:09.527 * Looking for test storage... 
00:10:09.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.527 21:31:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.527 21:31:59 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:09.527 21:31:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:10.905 21:32:01 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.905 21:32:01 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:10.905 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:10.905 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.905 21:32:01 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:10.905 Found net devices under 0000:08:00.0: cvl_0_0 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:10.905 Found net devices under 0000:08:00.1: cvl_0_1 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:10.905 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:10.906 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:10.906 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:11.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:11.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms
00:10:11.166
00:10:11.166 --- 10.0.0.2 ping statistics ---
00:10:11.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:11.166 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:11.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:11.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms
00:10:11.166
00:10:11.166 --- 10.0.0.1 ping statistics ---
00:10:11.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:11.166 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=308560
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 308560
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 308560 ']'
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:11.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:11.166 21:32:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.166 [2024-07-15 21:32:01.799942] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:10:11.166 [2024-07-15 21:32:01.800053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:11.166 EAL: No free 2048 kB hugepages reported on node 1
00:10:11.166 [2024-07-15 21:32:01.867685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:11.427 [2024-07-15 21:32:01.989118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:11.427 [2024-07-15 21:32:01.989182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:11.427 [2024-07-15 21:32:01.989199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:11.427 [2024-07-15 21:32:01.989213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:11.427 [2024-07-15 21:32:01.989226] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:11.427 [2024-07-15 21:32:01.991160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:11.427 [2024-07-15 21:32:01.991226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:11.427 [2024-07-15 21:32:01.991398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:11.427 [2024-07-15 21:32:01.991431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.427 [2024-07-15 21:32:02.138932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.427 Malloc0
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.427 Malloc1
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:11.427 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.427 [2024-07-15 21:32:02.216344] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420
00:10:11.685
00:10:11.685 Discovery Log Number of Records 2, Generation counter 2
00:10:11.685 =====Discovery Log Entry 0======
00:10:11.685 trtype: tcp
00:10:11.685 adrfam: ipv4
00:10:11.685 subtype: current discovery subsystem
00:10:11.685 treq: not required
00:10:11.685 portid: 0
00:10:11.685 trsvcid: 4420
00:10:11.685 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:10:11.685 traddr: 10.0.0.2
00:10:11.685 eflags: explicit discovery connections, duplicate discovery information
00:10:11.685 sectype: none
00:10:11.685 =====Discovery Log Entry 1======
00:10:11.685 trtype: tcp
00:10:11.685 adrfam: ipv4
00:10:11.685 subtype: nvme subsystem
00:10:11.685 treq: not required
00:10:11.685 portid: 0
00:10:11.685 trsvcid: 4420
00:10:11.685 subnqn: nqn.2016-06.io.spdk:cnode1
00:10:11.685 traddr: 10.0.0.2
00:10:11.685 eflags: none
00:10:11.685 sectype: none
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:10:11.685 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:12.251 21:32:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:10:12.251 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0
00:10:12.251 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:10:12.251 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:10:12.251 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:10:12.251 21:32:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2
00:10:14.146 /dev/nvme0n1 ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:10:14.146 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:14.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:14.404 21:32:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:14.404 rmmod nvme_tcp
00:10:14.404 rmmod nvme_fabrics
00:10:14.404 rmmod nvme_keyring
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 308560 ']'
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 308560
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 308560 ']'
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 308560
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 308560
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 308560'
00:10:14.404 killing process with pid 308560
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 308560
00:10:14.404 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 308560
00:10:14.664 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:14.664 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:14.664 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:14.664 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:14.664 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:14.664 21:32:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:14.664 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:14.664 21:32:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:16.574 21:32:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:16.574
00:10:16.574 real 0m7.525s
00:10:16.574 user 0m13.500s
00:10:16.574 sys 0m2.006s
00:10:16.574 21:32:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:16.574 21:32:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:10:16.574 ************************************
00:10:16.574 END TEST nvmf_nvme_cli
00:10:16.574 ************************************
00:10:16.833 21:32:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:16.833 21:32:07 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]]
00:10:16.833 21:32:07 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:10:16.833 21:32:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:16.833 21:32:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:16.833 21:32:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:16.833 ************************************
00:10:16.833 START TEST nvmf_vfio_user
00:10:16.833 ************************************
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:10:16.833 * Looking for test storage...
00:10:16.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=309206
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 309206'
00:10:16.833 Process pid: 309206
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 309206
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 309206 ']'
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:16.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:16.833 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:10:16.833 [2024-07-15 21:32:07.527557] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:10:16.833 [2024-07-15 21:32:07.527666] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:16.833 EAL: No free 2048 kB hugepages reported on node 1
00:10:16.833 [2024-07-15 21:32:07.595715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:17.092 [2024-07-15 21:32:07.715758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:17.092 [2024-07-15 21:32:07.715816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:17.092 [2024-07-15 21:32:07.715832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:17.092 [2024-07-15 21:32:07.715845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:17.092 [2024-07-15 21:32:07.715863] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:17.092 [2024-07-15 21:32:07.715944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:17.092 [2024-07-15 21:32:07.715971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:17.092 [2024-07-15 21:32:07.716020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:17.092 [2024-07-15 21:32:07.716023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:17.092 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:17.092 21:32:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0
00:10:17.092 21:32:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:10:18.465 21:32:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:10:18.465 21:32:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:10:18.465 21:32:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:10:18.465 21:32:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:10:18.465 21:32:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:10:18.465 21:32:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:10:18.723 Malloc1
00:10:18.723 21:32:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:10:18.980 21:32:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:10:19.239 21:32:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:10:19.497 21:32:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:10:19.497 21:32:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:10:19.497 21:32:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:10:19.754 Malloc2
00:10:19.754 21:32:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:10:20.012 21:32:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:10:20.269 21:32:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:10:20.526 21:32:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:10:20.526 21:32:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:10:20.526 21:32:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:10:20.526 21:32:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:10:20.526 21:32:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:10:20.526 21:32:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:10:20.526 [2024-07-15 21:32:11.270496] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:10:20.526 [2024-07-15 21:32:11.270548] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309531 ]
00:10:20.526 EAL: No free 2048 kB hugepages reported on node 1
00:10:20.526 [2024-07-15 21:32:11.307075] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:10:20.526 [2024-07-15 21:32:11.314534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:10:20.526 [2024-07-15 21:32:11.314561] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2b7d6eb000
00:10:20.526 [2024-07-15 21:32:11.315538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:10:20.526 [2024-07-15 21:32:11.316535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:10:20.527 [2024-07-15 21:32:11.317540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:10:20.786 [2024-07-15 21:32:11.318546] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:10:20.786 [2024-07-15 21:32:11.319548] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5,
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:20.786 [2024-07-15 21:32:11.320554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:20.786 [2024-07-15 21:32:11.321555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:20.786 [2024-07-15 21:32:11.322556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:20.786 [2024-07-15 21:32:11.323565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:20.786 [2024-07-15 21:32:11.323585] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2b7d6e0000 00:10:20.786 [2024-07-15 21:32:11.324804] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:20.786 [2024-07-15 21:32:11.346480] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:20.786 [2024-07-15 21:32:11.346516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:20.786 [2024-07-15 21:32:11.348710] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:20.786 [2024-07-15 21:32:11.348767] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:20.786 [2024-07-15 21:32:11.348872] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:20.786 [2024-07-15 21:32:11.348901] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:20.786 [2024-07-15 21:32:11.348911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:20.786 [2024-07-15 21:32:11.349690] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:20.786 [2024-07-15 21:32:11.349709] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:20.786 [2024-07-15 21:32:11.349721] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:20.786 [2024-07-15 21:32:11.350693] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:20.786 [2024-07-15 21:32:11.350711] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:20.786 [2024-07-15 21:32:11.350724] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:20.786 [2024-07-15 21:32:11.351695] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:20.786 [2024-07-15 21:32:11.351712] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:20.786 [2024-07-15 21:32:11.352703] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:20.786 [2024-07-15 21:32:11.352720] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:20.786 [2024-07-15 21:32:11.352729] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:20.786 [2024-07-15 21:32:11.352740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:20.786 [2024-07-15 21:32:11.352849] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:20.786 [2024-07-15 21:32:11.352857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:20.786 [2024-07-15 21:32:11.352866] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:20.786 [2024-07-15 21:32:11.353709] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:20.786 [2024-07-15 21:32:11.354720] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:20.786 [2024-07-15 21:32:11.355732] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:20.786 [2024-07-15 21:32:11.356737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:20.786 [2024-07-15 21:32:11.357013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:20.786 [2024-07-15 21:32:11.357732] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:20.786 [2024-07-15 21:32:11.357748] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:20.786 [2024-07-15 21:32:11.357757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.357781] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:20.786 [2024-07-15 21:32:11.357798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.357823] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:20.786 [2024-07-15 21:32:11.357832] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:20.786 [2024-07-15 21:32:11.357851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:20.786 [2024-07-15 21:32:11.357927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:20.786 [2024-07-15 21:32:11.357944] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:20.786 [2024-07-15 21:32:11.357955] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:20.786 [2024-07-15 21:32:11.357963] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 
00:10:20.786 [2024-07-15 21:32:11.357971] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:20.786 [2024-07-15 21:32:11.357978] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:20.786 [2024-07-15 21:32:11.357986] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:20.786 [2024-07-15 21:32:11.357994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.358007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.358029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:20.786 [2024-07-15 21:32:11.358043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:20.786 [2024-07-15 21:32:11.358066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.786 [2024-07-15 21:32:11.358079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.786 [2024-07-15 21:32:11.358091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.786 [2024-07-15 21:32:11.358103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:20.786 [2024-07-15 21:32:11.358112] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.358126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.358147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:20.786 [2024-07-15 21:32:11.358160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:20.786 [2024-07-15 21:32:11.358170] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:20.786 [2024-07-15 21:32:11.358179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.358190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.358200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:20.786 [2024-07-15 21:32:11.358213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:20.786 [2024-07-15 21:32:11.358231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:20.786 [2024-07-15 21:32:11.358297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 
00:10:20.787 [2024-07-15 21:32:11.358312] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358325] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:20.787 [2024-07-15 21:32:11.358333] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:20.787 [2024-07-15 21:32:11.358342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358375] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:20.787 [2024-07-15 21:32:11.358390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358416] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:20.787 [2024-07-15 21:32:11.358424] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:20.787 [2024-07-15 21:32:11.358433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:20.787 
[2024-07-15 21:32:11.358477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358503] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:20.787 [2024-07-15 21:32:11.358511] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:20.787 [2024-07-15 21:32:11.358520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 
30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358613] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:20.787 [2024-07-15 21:32:11.358621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:20.787 [2024-07-15 21:32:11.358629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:20.787 [2024-07-15 21:32:11.358655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358778] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:20.787 [2024-07-15 21:32:11.358788] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:20.787 [2024-07-15 21:32:11.358799] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:20.787 [2024-07-15 21:32:11.358805] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:20.787 [2024-07-15 21:32:11.358815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:20.787 [2024-07-15 21:32:11.358826] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:20.787 [2024-07-15 21:32:11.358834] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:20.787 [2024-07-15 21:32:11.358843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358854] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:20.787 [2024-07-15 21:32:11.358861] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:20.787 [2024-07-15 21:32:11.358870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358885] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:20.787 [2024-07-15 21:32:11.358892] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:20.787 [2024-07-15 21:32:11.358901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:20.787 [2024-07-15 21:32:11.358913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:20.787 [2024-07-15 21:32:11.358971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:20.787 ===================================================== 00:10:20.787 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:20.787 ===================================================== 00:10:20.787 Controller Capabilities/Features 00:10:20.787 ================================ 00:10:20.787 Vendor ID: 4e58 00:10:20.787 Subsystem Vendor ID: 4e58 00:10:20.787 Serial Number: SPDK1 00:10:20.787 Model Number: SPDK bdev Controller 00:10:20.787 Firmware Version: 24.09 00:10:20.787 Recommended Arb Burst: 6 00:10:20.787 IEEE OUI Identifier: 8d 6b 50 00:10:20.787 Multi-path I/O 00:10:20.787 May have multiple subsystem ports: Yes 00:10:20.787 May have multiple controllers: Yes 00:10:20.787 Associated with SR-IOV VF: No 00:10:20.787 Max Data Transfer Size: 131072 00:10:20.787 Max Number of Namespaces: 32 00:10:20.787 Max Number of I/O Queues: 127 00:10:20.787 NVMe Specification Version (VS): 1.3 00:10:20.787 NVMe Specification Version (Identify): 1.3 00:10:20.787 Maximum Queue Entries: 256 00:10:20.787 
Contiguous Queues Required: Yes 00:10:20.787 Arbitration Mechanisms Supported 00:10:20.787 Weighted Round Robin: Not Supported 00:10:20.787 Vendor Specific: Not Supported 00:10:20.787 Reset Timeout: 15000 ms 00:10:20.787 Doorbell Stride: 4 bytes 00:10:20.787 NVM Subsystem Reset: Not Supported 00:10:20.787 Command Sets Supported 00:10:20.787 NVM Command Set: Supported 00:10:20.787 Boot Partition: Not Supported 00:10:20.787 Memory Page Size Minimum: 4096 bytes 00:10:20.787 Memory Page Size Maximum: 4096 bytes 00:10:20.787 Persistent Memory Region: Not Supported 00:10:20.787 Optional Asynchronous Events Supported 00:10:20.787 Namespace Attribute Notices: Supported 00:10:20.787 Firmware Activation Notices: Not Supported 00:10:20.787 ANA Change Notices: Not Supported 00:10:20.787 PLE Aggregate Log Change Notices: Not Supported 00:10:20.787 LBA Status Info Alert Notices: Not Supported 00:10:20.787 EGE Aggregate Log Change Notices: Not Supported 00:10:20.787 Normal NVM Subsystem Shutdown event: Not Supported 00:10:20.787 Zone Descriptor Change Notices: Not Supported 00:10:20.787 Discovery Log Change Notices: Not Supported 00:10:20.787 Controller Attributes 00:10:20.787 128-bit Host Identifier: Supported 00:10:20.787 Non-Operational Permissive Mode: Not Supported 00:10:20.787 NVM Sets: Not Supported 00:10:20.787 Read Recovery Levels: Not Supported 00:10:20.787 Endurance Groups: Not Supported 00:10:20.787 Predictable Latency Mode: Not Supported 00:10:20.787 Traffic Based Keep ALive: Not Supported 00:10:20.787 Namespace Granularity: Not Supported 00:10:20.787 SQ Associations: Not Supported 00:10:20.787 UUID List: Not Supported 00:10:20.787 Multi-Domain Subsystem: Not Supported 00:10:20.787 Fixed Capacity Management: Not Supported 00:10:20.787 Variable Capacity Management: Not Supported 00:10:20.787 Delete Endurance Group: Not Supported 00:10:20.787 Delete NVM Set: Not Supported 00:10:20.787 Extended LBA Formats Supported: Not Supported 00:10:20.787 Flexible Data Placement 
Supported: Not Supported 00:10:20.787 00:10:20.787 Controller Memory Buffer Support 00:10:20.787 ================================ 00:10:20.787 Supported: No 00:10:20.787 00:10:20.787 Persistent Memory Region Support 00:10:20.787 ================================ 00:10:20.787 Supported: No 00:10:20.787 00:10:20.787 Admin Command Set Attributes 00:10:20.787 ============================ 00:10:20.787 Security Send/Receive: Not Supported 00:10:20.787 Format NVM: Not Supported 00:10:20.787 Firmware Activate/Download: Not Supported 00:10:20.787 Namespace Management: Not Supported 00:10:20.787 Device Self-Test: Not Supported 00:10:20.787 Directives: Not Supported 00:10:20.788 NVMe-MI: Not Supported 00:10:20.788 Virtualization Management: Not Supported 00:10:20.788 Doorbell Buffer Config: Not Supported 00:10:20.788 Get LBA Status Capability: Not Supported 00:10:20.788 Command & Feature Lockdown Capability: Not Supported 00:10:20.788 Abort Command Limit: 4 00:10:20.788 Async Event Request Limit: 4 00:10:20.788 Number of Firmware Slots: N/A 00:10:20.788 Firmware Slot 1 Read-Only: N/A 00:10:20.788 Firmware Activation Without Reset: N/A 00:10:20.788 Multiple Update Detection Support: N/A 00:10:20.788 Firmware Update Granularity: No Information Provided 00:10:20.788 Per-Namespace SMART Log: No 00:10:20.788 Asymmetric Namespace Access Log Page: Not Supported 00:10:20.788 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:20.788 Command Effects Log Page: Supported 00:10:20.788 Get Log Page Extended Data: Supported 00:10:20.788 Telemetry Log Pages: Not Supported 00:10:20.788 Persistent Event Log Pages: Not Supported 00:10:20.788 Supported Log Pages Log Page: May Support 00:10:20.788 Commands Supported & Effects Log Page: Not Supported 00:10:20.788 Feature Identifiers & Effects Log Page:May Support 00:10:20.788 NVMe-MI Commands & Effects Log Page: May Support 00:10:20.788 Data Area 4 for Telemetry Log: Not Supported 00:10:20.788 Error Log Page Entries Supported: 128 00:10:20.788 Keep 
Alive: Supported 00:10:20.788 Keep Alive Granularity: 10000 ms 00:10:20.788 00:10:20.788 NVM Command Set Attributes 00:10:20.788 ========================== 00:10:20.788 Submission Queue Entry Size 00:10:20.788 Max: 64 00:10:20.788 Min: 64 00:10:20.788 Completion Queue Entry Size 00:10:20.788 Max: 16 00:10:20.788 Min: 16 00:10:20.788 Number of Namespaces: 32 00:10:20.788 Compare Command: Supported 00:10:20.788 Write Uncorrectable Command: Not Supported 00:10:20.788 Dataset Management Command: Supported 00:10:20.788 Write Zeroes Command: Supported 00:10:20.788 Set Features Save Field: Not Supported 00:10:20.788 Reservations: Not Supported 00:10:20.788 Timestamp: Not Supported 00:10:20.788 Copy: Supported 00:10:20.788 Volatile Write Cache: Present 00:10:20.788 Atomic Write Unit (Normal): 1 00:10:20.788 Atomic Write Unit (PFail): 1 00:10:20.788 Atomic Compare & Write Unit: 1 00:10:20.788 Fused Compare & Write: Supported 00:10:20.788 Scatter-Gather List 00:10:20.788 SGL Command Set: Supported (Dword aligned) 00:10:20.788 SGL Keyed: Not Supported 00:10:20.788 SGL Bit Bucket Descriptor: Not Supported 00:10:20.788 SGL Metadata Pointer: Not Supported 00:10:20.788 Oversized SGL: Not Supported 00:10:20.788 SGL Metadata Address: Not Supported 00:10:20.788 SGL Offset: Not Supported 00:10:20.788 Transport SGL Data Block: Not Supported 00:10:20.788 Replay Protected Memory Block: Not Supported 00:10:20.788 00:10:20.788 Firmware Slot Information 00:10:20.788 ========================= 00:10:20.788 Active slot: 1 00:10:20.788 Slot 1 Firmware Revision: 24.09 00:10:20.788 00:10:20.788 00:10:20.788 Commands Supported and Effects 00:10:20.788 ============================== 00:10:20.788 Admin Commands 00:10:20.788 -------------- 00:10:20.788 Get Log Page (02h): Supported 00:10:20.788 Identify (06h): Supported 00:10:20.788 Abort (08h): Supported 00:10:20.788 Set Features (09h): Supported 00:10:20.788 Get Features (0Ah): Supported 00:10:20.788 Asynchronous Event Request (0Ch): Supported 
00:10:20.788 Keep Alive (18h): Supported 00:10:20.788 I/O Commands 00:10:20.788 ------------ 00:10:20.788 Flush (00h): Supported LBA-Change 00:10:20.788 Write (01h): Supported LBA-Change 00:10:20.788 Read (02h): Supported 00:10:20.788 Compare (05h): Supported 00:10:20.788 Write Zeroes (08h): Supported LBA-Change 00:10:20.788 Dataset Management (09h): Supported LBA-Change 00:10:20.788 Copy (19h): Supported LBA-Change 00:10:20.788 00:10:20.788 Error Log 00:10:20.788 ========= 00:10:20.788 00:10:20.788 Arbitration 00:10:20.788 =========== 00:10:20.788 Arbitration Burst: 1 00:10:20.788 00:10:20.788 Power Management 00:10:20.788 ================ 00:10:20.788 Number of Power States: 1 00:10:20.788 Current Power State: Power State #0 00:10:20.788 Power State #0: 00:10:20.788 Max Power: 0.00 W 00:10:20.788 Non-Operational State: Operational 00:10:20.788 Entry Latency: Not Reported 00:10:20.788 Exit Latency: Not Reported 00:10:20.788 Relative Read Throughput: 0 00:10:20.788 Relative Read Latency: 0 00:10:20.788 Relative Write Throughput: 0 00:10:20.788 Relative Write Latency: 0 00:10:20.788 Idle Power: Not Reported 00:10:20.788 Active Power: Not Reported 00:10:20.788 Non-Operational Permissive Mode: Not Supported 00:10:20.788 00:10:20.788 Health Information 00:10:20.788 ================== 00:10:20.788 Critical Warnings: 00:10:20.788 Available Spare Space: OK 00:10:20.788 Temperature: OK 00:10:20.788 Device Reliability: OK 00:10:20.788 Read Only: No 00:10:20.788 Volatile Memory Backup: OK 00:10:20.788 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:20.788 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:20.788 Available Spare: 0% 00:10:20.788 Available Sp[2024-07-15 21:32:11.359092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:20.788 [2024-07-15 21:32:11.359111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:10:20.788 [2024-07-15 21:32:11.359163] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:20.788 [2024-07-15 21:32:11.359181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.788 [2024-07-15 21:32:11.359192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.788 [2024-07-15 21:32:11.359203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.788 [2024-07-15 21:32:11.359218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:20.788 [2024-07-15 21:32:11.359742] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:20.788 [2024-07-15 21:32:11.359761] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:20.788 [2024-07-15 21:32:11.360761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:20.788 [2024-07-15 21:32:11.360838] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:20.788 [2024-07-15 21:32:11.360851] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:20.788 [2024-07-15 21:32:11.361762] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:20.788 [2024-07-15 21:32:11.361784] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete 
in 0 milliseconds 00:10:20.788 [2024-07-15 21:32:11.361856] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:20.788 [2024-07-15 21:32:11.366149] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:20.788 are Threshold: 0% 00:10:20.788 Life Percentage Used: 0% 00:10:20.788 Data Units Read: 0 00:10:20.788 Data Units Written: 0 00:10:20.788 Host Read Commands: 0 00:10:20.788 Host Write Commands: 0 00:10:20.788 Controller Busy Time: 0 minutes 00:10:20.788 Power Cycles: 0 00:10:20.788 Power On Hours: 0 hours 00:10:20.788 Unsafe Shutdowns: 0 00:10:20.788 Unrecoverable Media Errors: 0 00:10:20.788 Lifetime Error Log Entries: 0 00:10:20.788 Warning Temperature Time: 0 minutes 00:10:20.788 Critical Temperature Time: 0 minutes 00:10:20.788 00:10:20.788 Number of Queues 00:10:20.788 ================ 00:10:20.788 Number of I/O Submission Queues: 127 00:10:20.788 Number of I/O Completion Queues: 127 00:10:20.788 00:10:20.788 Active Namespaces 00:10:20.788 ================= 00:10:20.788 Namespace ID:1 00:10:20.788 Error Recovery Timeout: Unlimited 00:10:20.788 Command Set Identifier: NVM (00h) 00:10:20.788 Deallocate: Supported 00:10:20.788 Deallocated/Unwritten Error: Not Supported 00:10:20.788 Deallocated Read Value: Unknown 00:10:20.788 Deallocate in Write Zeroes: Not Supported 00:10:20.788 Deallocated Guard Field: 0xFFFF 00:10:20.788 Flush: Supported 00:10:20.788 Reservation: Supported 00:10:20.788 Namespace Sharing Capabilities: Multiple Controllers 00:10:20.788 Size (in LBAs): 131072 (0GiB) 00:10:20.788 Capacity (in LBAs): 131072 (0GiB) 00:10:20.788 Utilization (in LBAs): 131072 (0GiB) 00:10:20.788 NGUID: 038FD1520437432F9CAEC0B56D15FD75 00:10:20.788 UUID: 038fd152-0437-432f-9cae-c0b56d15fd75 00:10:20.788 Thin Provisioning: Not Supported 00:10:20.788 Per-NS Atomic Units: Yes 00:10:20.788 Atomic Boundary Size (Normal): 0 
00:10:20.788 Atomic Boundary Size (PFail): 0 00:10:20.788 Atomic Boundary Offset: 0 00:10:20.788 Maximum Single Source Range Length: 65535 00:10:20.788 Maximum Copy Length: 65535 00:10:20.788 Maximum Source Range Count: 1 00:10:20.788 NGUID/EUI64 Never Reused: No 00:10:20.788 Namespace Write Protected: No 00:10:20.788 Number of LBA Formats: 1 00:10:20.788 Current LBA Format: LBA Format #00 00:10:20.788 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:20.788 00:10:20.788 21:32:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:20.788 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.788 [2024-07-15 21:32:11.568928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:26.051 Initializing NVMe Controllers 00:10:26.051 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:26.051 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:26.051 Initialization complete. Launching workers. 
00:10:26.051 ======================================================== 00:10:26.051 Latency(us) 00:10:26.051 Device Information : IOPS MiB/s Average min max 00:10:26.051 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32346.00 126.35 3956.86 1236.16 10059.01 00:10:26.051 ======================================================== 00:10:26.051 Total : 32346.00 126.35 3956.86 1236.16 10059.01 00:10:26.051 00:10:26.051 [2024-07-15 21:32:16.592228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:26.051 21:32:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:26.051 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.051 [2024-07-15 21:32:16.806215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:31.312 Initializing NVMe Controllers 00:10:31.312 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:31.312 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:31.312 Initialization complete. Launching workers. 
00:10:31.312 ======================================================== 00:10:31.312 Latency(us) 00:10:31.312 Device Information : IOPS MiB/s Average min max 00:10:31.312 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.43 6983.95 11980.96 00:10:31.312 ======================================================== 00:10:31.312 Total : 16051.20 62.70 7984.43 6983.95 11980.96 00:10:31.312 00:10:31.312 [2024-07-15 21:32:21.842804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:31.312 21:32:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:31.312 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.312 [2024-07-15 21:32:22.054748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:36.565 [2024-07-15 21:32:27.119376] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:36.565 Initializing NVMe Controllers 00:10:36.565 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:36.565 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:36.565 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:36.565 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:36.565 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:36.565 Initialization complete. Launching workers. 
00:10:36.565 Starting thread on core 2 00:10:36.565 Starting thread on core 3 00:10:36.565 Starting thread on core 1 00:10:36.565 21:32:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:36.565 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.823 [2024-07-15 21:32:27.375336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:40.102 [2024-07-15 21:32:30.434403] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:40.102 Initializing NVMe Controllers 00:10:40.102 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:40.102 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:40.102 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:40.102 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:40.102 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:40.102 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:40.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:40.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:40.102 Initialization complete. Launching workers. 
00:10:40.102 Starting thread on core 1 with urgent priority queue 00:10:40.102 Starting thread on core 2 with urgent priority queue 00:10:40.102 Starting thread on core 3 with urgent priority queue 00:10:40.102 Starting thread on core 0 with urgent priority queue 00:10:40.102 SPDK bdev Controller (SPDK1 ) core 0: 6631.67 IO/s 15.08 secs/100000 ios 00:10:40.102 SPDK bdev Controller (SPDK1 ) core 1: 6635.33 IO/s 15.07 secs/100000 ios 00:10:40.102 SPDK bdev Controller (SPDK1 ) core 2: 8616.33 IO/s 11.61 secs/100000 ios 00:10:40.102 SPDK bdev Controller (SPDK1 ) core 3: 8116.00 IO/s 12.32 secs/100000 ios 00:10:40.102 ======================================================== 00:10:40.102 00:10:40.102 21:32:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:40.102 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.102 [2024-07-15 21:32:30.692678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:40.102 Initializing NVMe Controllers 00:10:40.102 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:40.102 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:40.102 Namespace ID: 1 size: 0GB 00:10:40.102 Initialization complete. 00:10:40.102 INFO: using host memory buffer for IO 00:10:40.102 Hello world! 
00:10:40.102 [2024-07-15 21:32:30.726200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:40.102 21:32:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:40.102 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.359 [2024-07-15 21:32:30.978612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:41.290 Initializing NVMe Controllers 00:10:41.290 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:41.290 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:41.290 Initialization complete. Launching workers. 00:10:41.290 submit (in ns) avg, min, max = 7718.5, 3806.7, 4008710.0 00:10:41.290 complete (in ns) avg, min, max = 23568.0, 2227.8, 7002134.4 00:10:41.290 00:10:41.290 Submit histogram 00:10:41.290 ================ 00:10:41.290 Range in us Cumulative Count 00:10:41.290 3.793 - 3.816: 0.1015% ( 14) 00:10:41.290 3.816 - 3.840: 6.3854% ( 867) 00:10:41.290 3.840 - 3.864: 21.5482% ( 2092) 00:10:41.290 3.864 - 3.887: 45.2780% ( 3274) 00:10:41.290 3.887 - 3.911: 63.7457% ( 2548) 00:10:41.290 3.911 - 3.935: 76.1542% ( 1712) 00:10:41.290 3.935 - 3.959: 83.7791% ( 1052) 00:10:41.290 3.959 - 3.982: 87.6495% ( 534) 00:10:41.290 3.982 - 4.006: 88.9614% ( 181) 00:10:41.290 4.006 - 4.030: 90.3602% ( 193) 00:10:41.290 4.030 - 4.053: 92.6506% ( 316) 00:10:41.290 4.053 - 4.077: 94.8902% ( 309) 00:10:41.290 4.077 - 4.101: 96.3543% ( 202) 00:10:41.290 4.101 - 4.124: 97.2023% ( 117) 00:10:41.291 4.124 - 4.148: 97.8329% ( 87) 00:10:41.291 4.148 - 4.172: 97.9706% ( 19) 00:10:41.291 4.172 - 4.196: 98.0720% ( 14) 00:10:41.291 4.196 - 4.219: 98.1373% ( 9) 00:10:41.291 4.219 - 4.243: 98.1735% ( 5) 00:10:41.291 4.243 - 4.267: 98.2025% ( 4) 
00:10:41.291 4.267 - 4.290: 98.2460% ( 6) 00:10:41.291 4.290 - 4.314: 98.3185% ( 10) 00:10:41.291 4.314 - 4.338: 98.4199% ( 14) 00:10:41.291 4.338 - 4.361: 98.5069% ( 12) 00:10:41.291 4.361 - 4.385: 98.6591% ( 21) 00:10:41.291 4.385 - 4.409: 98.8113% ( 21) 00:10:41.291 4.409 - 4.433: 98.8548% ( 6) 00:10:41.291 4.433 - 4.456: 98.8766% ( 3) 00:10:41.291 4.456 - 4.480: 98.8838% ( 1) 00:10:41.291 4.480 - 4.504: 98.9128% ( 4) 00:10:41.291 4.504 - 4.527: 98.9346% ( 3) 00:10:41.291 4.575 - 4.599: 98.9563% ( 3) 00:10:41.291 4.622 - 4.646: 98.9635% ( 1) 00:10:41.291 4.670 - 4.693: 98.9708% ( 1) 00:10:41.291 4.693 - 4.717: 98.9998% ( 4) 00:10:41.291 4.717 - 4.741: 99.0143% ( 2) 00:10:41.291 4.741 - 4.764: 99.0215% ( 1) 00:10:41.291 4.788 - 4.812: 99.0288% ( 1) 00:10:41.291 4.836 - 4.859: 99.0505% ( 3) 00:10:41.291 4.859 - 4.883: 99.0650% ( 2) 00:10:41.291 4.907 - 4.930: 99.0795% ( 2) 00:10:41.291 4.978 - 5.001: 99.0868% ( 1) 00:10:41.291 5.001 - 5.025: 99.0940% ( 1) 00:10:41.291 5.167 - 5.191: 99.1085% ( 2) 00:10:41.291 5.191 - 5.215: 99.1157% ( 1) 00:10:41.291 5.239 - 5.262: 99.1302% ( 2) 00:10:41.291 5.286 - 5.310: 99.1375% ( 1) 00:10:41.291 5.333 - 5.357: 99.1447% ( 1) 00:10:41.291 5.428 - 5.452: 99.1520% ( 1) 00:10:41.291 5.523 - 5.547: 99.1592% ( 1) 00:10:41.291 5.594 - 5.618: 99.1665% ( 1) 00:10:41.291 5.689 - 5.713: 99.1737% ( 1) 00:10:41.291 6.116 - 6.163: 99.1810% ( 1) 00:10:41.291 6.163 - 6.210: 99.1882% ( 1) 00:10:41.291 6.258 - 6.305: 99.1955% ( 1) 00:10:41.291 6.400 - 6.447: 99.2100% ( 2) 00:10:41.291 6.447 - 6.495: 99.2245% ( 2) 00:10:41.291 6.542 - 6.590: 99.2317% ( 1) 00:10:41.291 6.684 - 6.732: 99.2390% ( 1) 00:10:41.291 6.779 - 6.827: 99.2462% ( 1) 00:10:41.291 6.827 - 6.874: 99.2535% ( 1) 00:10:41.291 6.874 - 6.921: 99.2680% ( 2) 00:10:41.291 6.921 - 6.969: 99.2897% ( 3) 00:10:41.291 6.969 - 7.016: 99.3187% ( 4) 00:10:41.291 7.016 - 7.064: 99.3259% ( 1) 00:10:41.291 7.159 - 7.206: 99.3404% ( 2) 00:10:41.291 7.206 - 7.253: 99.3477% ( 1) 00:10:41.291 7.348 - 
7.396: 99.3549% ( 1) 00:10:41.291 7.443 - 7.490: 99.3694% ( 2) 00:10:41.291 7.538 - 7.585: 99.3839% ( 2) 00:10:41.291 7.585 - 7.633: 99.3912% ( 1) 00:10:41.291 7.633 - 7.680: 99.4274% ( 5) 00:10:41.291 7.680 - 7.727: 99.4419% ( 2) 00:10:41.291 7.917 - 7.964: 99.4637% ( 3) 00:10:41.291 7.964 - 8.012: 99.4709% ( 1) 00:10:41.291 8.012 - 8.059: 99.4999% ( 4) 00:10:41.291 8.107 - 8.154: 99.5144% ( 2) 00:10:41.291 8.249 - 8.296: 99.5361% ( 3) 00:10:41.291 8.439 - 8.486: 99.5579% ( 3) 00:10:41.291 8.486 - 8.533: 99.5651% ( 1) 00:10:41.291 8.533 - 8.581: 99.5724% ( 1) 00:10:41.291 8.723 - 8.770: 99.5796% ( 1) 00:10:41.291 9.055 - 9.102: 99.5869% ( 1) 00:10:41.291 9.150 - 9.197: 99.5941% ( 1) 00:10:41.291 9.244 - 9.292: 99.6014% ( 1) 00:10:41.291 9.339 - 9.387: 99.6159% ( 2) 00:10:41.291 9.434 - 9.481: 99.6231% ( 1) 00:10:41.291 9.481 - 9.529: 99.6304% ( 1) 00:10:41.291 9.719 - 9.766: 99.6376% ( 1) 00:10:41.291 9.766 - 9.813: 99.6521% ( 2) 00:10:41.291 9.908 - 9.956: 99.6593% ( 1) 00:10:41.291 9.956 - 10.003: 99.6666% ( 1) 00:10:41.291 10.193 - 10.240: 99.6738% ( 1) 00:10:41.291 10.240 - 10.287: 99.6811% ( 1) 00:10:41.291 10.287 - 10.335: 99.6883% ( 1) 00:10:41.291 10.335 - 10.382: 99.7028% ( 2) 00:10:41.291 10.382 - 10.430: 99.7101% ( 1) 00:10:41.291 10.430 - 10.477: 99.7173% ( 1) 00:10:41.291 10.761 - 10.809: 99.7246% ( 1) 00:10:41.291 10.809 - 10.856: 99.7391% ( 2) 00:10:41.291 10.856 - 10.904: 99.7463% ( 1) 00:10:41.291 10.904 - 10.951: 99.7536% ( 1) 00:10:41.291 11.093 - 11.141: 99.7681% ( 2) 00:10:41.291 11.188 - 11.236: 99.7753% ( 1) 00:10:41.291 11.236 - 11.283: 99.7826% ( 1) 00:10:41.291 11.378 - 11.425: 99.7898% ( 1) 00:10:41.291 11.520 - 11.567: 99.7971% ( 1) 00:10:41.291 11.567 - 11.615: 99.8043% ( 1) 00:10:41.291 11.615 - 11.662: 99.8260% ( 3) 00:10:41.291 12.089 - 12.136: 99.8333% ( 1) 00:10:41.291 12.231 - 12.326: 99.8478% ( 2) 00:10:41.291 12.326 - 12.421: 99.8550% ( 1) 00:10:41.291 12.516 - 12.610: 99.8695% ( 2) 00:10:41.291 12.800 - 12.895: 99.8768% ( 1) 
00:10:41.291 12.895 - 12.990: 99.8840% ( 1) 00:10:41.291 12.990 - 13.084: 99.8913% ( 1) 00:10:41.291 15.834 - 15.929: 99.8985% ( 1) 00:10:41.291 16.213 - 16.308: 99.9058% ( 1) 00:10:41.291 3980.705 - 4004.978: 99.9855% ( 11) 00:10:41.291 4004.978 - 4029.250: 100.0000% ( 2) 00:10:41.291 00:10:41.291 Complete histogram 00:10:41.291 ================== 00:10:41.291 Range in us Cumulative Count 00:10:41.291 2.216 - 2.228: 0.0072% ( 1) 00:10:41.291 2.228 - 2.240: 12.1258% ( 1672) 00:10:41.291 2.240 - 2.252: 52.6419% ( 5590) 00:10:41.291 2.252 - 2.264: 59.4767% ( 943) 00:10:41.291 2.264 - 2.276: 76.2557% ( 2315) 00:10:41.291 2.276 - 2.287: 89.8239% ( 1872) 00:10:41.291 2.287 - 2.299: 94.9482% ( 707) 00:10:41.291 2.299 - 2.311: 96.4992% ( 214) 00:10:41.291 2.311 - 2.323: 97.6009% ( 152) 00:10:41.291 2.323 - 2.335: 98.0503% ( 62) 00:10:41.291 2.335 - 2.347: 98.3910% ( 47) 00:10:41.291 2.347 - 2.359: 98.6736% ( 39) 00:10:41.291 2.359 - 2.370: 98.8113% ( 19) 00:10:41.291 2.370 - 2.382: 98.8476% ( 5) 00:10:41.291 2.382 - 2.394: 98.8548% ( 1) 00:10:41.291 2.406 - 2.418: 98.8693% ( 2) 00:10:41.291 2.418 - 2.430: 98.8983% ( 4) 00:10:41.291 2.430 - 2.441: 98.9201% ( 3) 00:10:41.291 2.441 - 2.453: 98.9418% ( 3) 00:10:41.291 2.453 - 2.465: 98.9563% ( 2) 00:10:41.291 2.465 - 2.477: 98.9780% ( 3) 00:10:41.291 2.477 - 2.489: 98.9853% ( 1) 00:10:41.291 2.501 - 2.513: 98.9925% ( 1) 00:10:41.291 2.513 - 2.524: 98.9998% ( 1) 00:10:41.291 2.536 - 2.548: 99.0070% ( 1) 00:10:41.291 2.548 - 2.560: 99.0143% ( 1) 00:10:41.291 2.714 - 2.726: 99.0288% ( 2) 00:10:41.291 2.750 - 2.761: 99.0360% ( 1) 00:10:41.291 2.785 - 2.797: 99.0433% ( 1) 00:10:41.291 2.916 - 2.927: 99.0578% ( 2) 00:10:41.291 2.951 - 2.963: 99.0723% ( 2) 00:10:41.291 2.963 - 2.975: 99.0940% ( 3) 00:10:41.291 2.975 - 2.987: 99.1013% ( 1) 00:10:41.291 2.987 - 2.999: 99.1157% ( 2) 00:10:41.291 2.999 - 3.010: 99.1447% ( 4) 00:10:41.291 3.022 - 3.034: 99.1665% ( 3) 00:10:41.291 3.034 - 3.058: 99.2100% ( 6) 00:10:41.291 3.058 - 3.081: 
99.2245% ( 2) 00:10:41.291 3.081 - 3.105: 99.2535% ( 4) 00:10:41.291 3.129 - 3.153: 99.2680% ( 2) 00:10:41.291 3.153 - 3.176: 99.2752% ( 1) 00:10:41.291 3.319 - 3.342: 99.2825% ( 1) 00:10:41.291 3.390 - 3.413: 99.2897% ( 1) 00:10:41.291 3.461 - 3.484: 99.2969% ( 1) 00:10:41.291 3.532 - 3.556: 99.3042% ( 1) 00:10:41.291 [2024-07-15 21:32:31.997021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:41.291 3.674 - 3.698: 99.3114% ( 1) 00:10:41.291 4.124 - 4.148: 99.3187% ( 1) 00:10:41.291 4.290 - 4.314: 99.3259% ( 1) 00:10:41.291 5.167 - 5.191: 99.3332% ( 1) 00:10:41.291 5.333 - 5.357: 99.3404% ( 1) 00:10:41.291 5.404 - 5.428: 99.3477% ( 1) 00:10:41.291 5.618 - 5.641: 99.3549% ( 1) 00:10:41.291 5.641 - 5.665: 99.3622% ( 1) 00:10:41.291 5.689 - 5.713: 99.3694% ( 1) 00:10:41.291 5.784 - 5.807: 99.3767% ( 1) 00:10:41.291 5.855 - 5.879: 99.3839% ( 1) 00:10:41.291 6.210 - 6.258: 99.3912% ( 1) 00:10:41.291 6.447 - 6.495: 99.3984% ( 1) 00:10:41.291 6.590 - 6.637: 99.4057% ( 1) 00:10:41.291 6.827 - 6.874: 99.4129% ( 1) 00:10:41.291 6.969 - 7.016: 99.4274% ( 2) 00:10:41.291 7.016 - 7.064: 99.4347% ( 1) 00:10:41.291 7.348 - 7.396: 99.4419% ( 1) 00:10:41.291 7.585 - 7.633: 99.4492% ( 1) 00:10:41.291 12.990 - 13.084: 99.4564% ( 1) 00:10:41.291 13.938 - 14.033: 99.4637% ( 1) 00:10:41.291 19.437 - 19.532: 99.4709% ( 1) 00:10:41.291 3009.801 - 3021.938: 99.4781% ( 1) 00:10:41.291 3980.705 - 4004.978: 99.9058% ( 59) 00:10:41.291 4004.978 - 4029.250: 99.9928% ( 12) 00:10:41.291 6990.507 - 7039.052: 100.0000% ( 1) 00:10:41.291 00:10:41.291 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:41.291 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:41.291 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local
subnqn=nqn.2019-07.io.spdk:cnode1 00:10:41.291 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:41.291 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:41.549 [ 00:10:41.549 { 00:10:41.549 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:41.549 "subtype": "Discovery", 00:10:41.549 "listen_addresses": [], 00:10:41.549 "allow_any_host": true, 00:10:41.549 "hosts": [] 00:10:41.549 }, 00:10:41.549 { 00:10:41.549 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:41.549 "subtype": "NVMe", 00:10:41.549 "listen_addresses": [ 00:10:41.549 { 00:10:41.549 "trtype": "VFIOUSER", 00:10:41.549 "adrfam": "IPv4", 00:10:41.549 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:41.549 "trsvcid": "0" 00:10:41.549 } 00:10:41.549 ], 00:10:41.549 "allow_any_host": true, 00:10:41.549 "hosts": [], 00:10:41.549 "serial_number": "SPDK1", 00:10:41.549 "model_number": "SPDK bdev Controller", 00:10:41.549 "max_namespaces": 32, 00:10:41.549 "min_cntlid": 1, 00:10:41.549 "max_cntlid": 65519, 00:10:41.549 "namespaces": [ 00:10:41.549 { 00:10:41.549 "nsid": 1, 00:10:41.549 "bdev_name": "Malloc1", 00:10:41.549 "name": "Malloc1", 00:10:41.549 "nguid": "038FD1520437432F9CAEC0B56D15FD75", 00:10:41.549 "uuid": "038fd152-0437-432f-9cae-c0b56d15fd75" 00:10:41.549 } 00:10:41.549 ] 00:10:41.549 }, 00:10:41.549 { 00:10:41.549 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:41.549 "subtype": "NVMe", 00:10:41.549 "listen_addresses": [ 00:10:41.549 { 00:10:41.549 "trtype": "VFIOUSER", 00:10:41.549 "adrfam": "IPv4", 00:10:41.549 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:41.549 "trsvcid": "0" 00:10:41.549 } 00:10:41.549 ], 00:10:41.549 "allow_any_host": true, 00:10:41.549 "hosts": [], 00:10:41.549 "serial_number": "SPDK2", 00:10:41.549 "model_number": "SPDK bdev Controller", 00:10:41.549 "max_namespaces": 32, 00:10:41.549 "min_cntlid": 1, 
00:10:41.549 "max_cntlid": 65519, 00:10:41.549 "namespaces": [ 00:10:41.549 { 00:10:41.549 "nsid": 1, 00:10:41.549 "bdev_name": "Malloc2", 00:10:41.549 "name": "Malloc2", 00:10:41.549 "nguid": "E3308DC0ADE34A0AAFF0403F8B1F9DFE", 00:10:41.549 "uuid": "e3308dc0-ade3-4a0a-aff0-403f8b1f9dfe" 00:10:41.549 } 00:10:41.549 ] 00:10:41.549 } 00:10:41.549 ] 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=311530 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:10:41.807 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:10:41.807 [2024-07-15 21:32:32.493544] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:41.807 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:41.808 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:41.808 21:32:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:41.808 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:41.808 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:42.373 Malloc3 00:10:42.373 21:32:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:42.373 [2024-07-15 21:32:33.139968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:42.373 21:32:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:42.630 Asynchronous Event Request test 00:10:42.630 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:42.630 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:42.630 Registering asynchronous event callbacks... 00:10:42.630 Starting namespace attribute notice tests for all controllers... 
00:10:42.630 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:42.630 aer_cb - Changed Namespace 00:10:42.630 Cleaning up... 00:10:42.630 [ 00:10:42.630 { 00:10:42.630 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:42.630 "subtype": "Discovery", 00:10:42.630 "listen_addresses": [], 00:10:42.630 "allow_any_host": true, 00:10:42.630 "hosts": [] 00:10:42.630 }, 00:10:42.630 { 00:10:42.630 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:42.630 "subtype": "NVMe", 00:10:42.630 "listen_addresses": [ 00:10:42.630 { 00:10:42.630 "trtype": "VFIOUSER", 00:10:42.630 "adrfam": "IPv4", 00:10:42.630 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:42.630 "trsvcid": "0" 00:10:42.630 } 00:10:42.630 ], 00:10:42.630 "allow_any_host": true, 00:10:42.630 "hosts": [], 00:10:42.630 "serial_number": "SPDK1", 00:10:42.630 "model_number": "SPDK bdev Controller", 00:10:42.630 "max_namespaces": 32, 00:10:42.630 "min_cntlid": 1, 00:10:42.630 "max_cntlid": 65519, 00:10:42.630 "namespaces": [ 00:10:42.630 { 00:10:42.630 "nsid": 1, 00:10:42.630 "bdev_name": "Malloc1", 00:10:42.630 "name": "Malloc1", 00:10:42.630 "nguid": "038FD1520437432F9CAEC0B56D15FD75", 00:10:42.630 "uuid": "038fd152-0437-432f-9cae-c0b56d15fd75" 00:10:42.630 }, 00:10:42.630 { 00:10:42.630 "nsid": 2, 00:10:42.630 "bdev_name": "Malloc3", 00:10:42.630 "name": "Malloc3", 00:10:42.630 "nguid": "6679B6AA5AE9434492CF25896C819FE7", 00:10:42.630 "uuid": "6679b6aa-5ae9-4344-92cf-25896c819fe7" 00:10:42.630 } 00:10:42.630 ] 00:10:42.630 }, 00:10:42.630 { 00:10:42.630 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:42.630 "subtype": "NVMe", 00:10:42.630 "listen_addresses": [ 00:10:42.630 { 00:10:42.630 "trtype": "VFIOUSER", 00:10:42.630 "adrfam": "IPv4", 00:10:42.630 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:42.630 "trsvcid": "0" 00:10:42.630 } 00:10:42.630 ], 00:10:42.630 "allow_any_host": true, 00:10:42.630 "hosts": [], 00:10:42.630 "serial_number": 
"SPDK2", 00:10:42.630 "model_number": "SPDK bdev Controller", 00:10:42.630 "max_namespaces": 32, 00:10:42.630 "min_cntlid": 1, 00:10:42.630 "max_cntlid": 65519, 00:10:42.630 "namespaces": [ 00:10:42.630 { 00:10:42.630 "nsid": 1, 00:10:42.630 "bdev_name": "Malloc2", 00:10:42.630 "name": "Malloc2", 00:10:42.630 "nguid": "E3308DC0ADE34A0AAFF0403F8B1F9DFE", 00:10:42.630 "uuid": "e3308dc0-ade3-4a0a-aff0-403f8b1f9dfe" 00:10:42.630 } 00:10:42.630 ] 00:10:42.630 } 00:10:42.630 ] 00:10:42.630 21:32:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 311530 00:10:42.630 21:32:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:42.630 21:32:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:42.630 21:32:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:42.630 21:32:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:42.630 [2024-07-15 21:32:33.412915] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:10:42.630 [2024-07-15 21:32:33.412961] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311637 ] 00:10:42.889 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.889 [2024-07-15 21:32:33.447900] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:42.889 [2024-07-15 21:32:33.455420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:42.889 [2024-07-15 21:32:33.455447] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdddb225000 00:10:42.889 [2024-07-15 21:32:33.456415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.889 [2024-07-15 21:32:33.457415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.889 [2024-07-15 21:32:33.458428] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.889 [2024-07-15 21:32:33.459433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.889 [2024-07-15 21:32:33.460440] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.889 [2024-07-15 21:32:33.461448] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.889 [2024-07-15 21:32:33.462449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:42.889 [2024-07-15 21:32:33.463456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:42.889 [2024-07-15 21:32:33.464467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:42.889 [2024-07-15 21:32:33.464486] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdddb21a000 00:10:42.889 [2024-07-15 21:32:33.465703] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:42.889 [2024-07-15 21:32:33.486461] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:42.889 [2024-07-15 21:32:33.486494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:42.889 [2024-07-15 21:32:33.488577] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:42.889 [2024-07-15 21:32:33.488631] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:42.889 [2024-07-15 21:32:33.488723] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:42.889 [2024-07-15 21:32:33.488752] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:42.889 [2024-07-15 21:32:33.488766] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:42.889 [2024-07-15 21:32:33.489584] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:42.889 [2024-07-15 21:32:33.489604] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:42.889 [2024-07-15 21:32:33.489616] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:42.889 [2024-07-15 21:32:33.490586] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:42.889 [2024-07-15 21:32:33.490604] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:42.889 [2024-07-15 21:32:33.490618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:42.889 [2024-07-15 21:32:33.491594] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:42.889 [2024-07-15 21:32:33.491612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:42.889 [2024-07-15 21:32:33.492601] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:42.889 [2024-07-15 21:32:33.492619] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:42.889 [2024-07-15 21:32:33.492628] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:42.889 [2024-07-15 21:32:33.492640] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:42.889 [2024-07-15 21:32:33.492749] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:42.889 [2024-07-15 21:32:33.492757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:42.889 [2024-07-15 21:32:33.492766] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:42.889 [2024-07-15 21:32:33.493613] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:42.889 [2024-07-15 21:32:33.494626] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:42.889 [2024-07-15 21:32:33.495632] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:42.889 [2024-07-15 21:32:33.496625] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:42.889 [2024-07-15 21:32:33.496687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:42.889 [2024-07-15 21:32:33.497637] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:42.889 [2024-07-15 21:32:33.497655] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:42.889 [2024-07-15 21:32:33.497664] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:42.889 [2024-07-15 21:32:33.497692] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:42.889 [2024-07-15 21:32:33.497705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:42.889 [2024-07-15 21:32:33.497729] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.889 [2024-07-15 21:32:33.497739] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.889 [2024-07-15 21:32:33.497758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.889 [2024-07-15 21:32:33.504150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:42.889 [2024-07-15 21:32:33.504172] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:42.890 [2024-07-15 21:32:33.504185] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:42.890 [2024-07-15 21:32:33.504194] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:42.890 [2024-07-15 21:32:33.504201] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:42.890 [2024-07-15 21:32:33.504209] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:42.890 [2024-07-15 
21:32:33.504217] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:42.890 [2024-07-15 21:32:33.504225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.504239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.504255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.512150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.512177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.890 [2024-07-15 21:32:33.512191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.890 [2024-07-15 21:32:33.512203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.890 [2024-07-15 21:32:33.512215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:42.890 [2024-07-15 21:32:33.512224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.512239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:42.890 [2024-07-15 
21:32:33.512253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.520149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.520165] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:42.890 [2024-07-15 21:32:33.520174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.520189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.520200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.520213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.528149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.528217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.528232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.528246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:42.890 [2024-07-15 
21:32:33.528254] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:42.890 [2024-07-15 21:32:33.528264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.536150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.536173] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:42.890 [2024-07-15 21:32:33.536194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.536209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.536221] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.890 [2024-07-15 21:32:33.536229] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.890 [2024-07-15 21:32:33.536239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.544146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.544173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.544189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id 
descriptors (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.544202] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:42.890 [2024-07-15 21:32:33.544211] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.890 [2024-07-15 21:32:33.544220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.552150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.552172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.552185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.552202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.552213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.552221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.552230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.552239] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - 
Host ID 00:10:42.890 [2024-07-15 21:32:33.552247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:42.890 [2024-07-15 21:32:33.552256] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:42.890 [2024-07-15 21:32:33.552284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.560148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.560173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.568148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.568171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.576151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.576179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.584149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.584188] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:42.890 [2024-07-15 21:32:33.584198] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:42.890 [2024-07-15 
21:32:33.584205] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:42.890 [2024-07-15 21:32:33.584211] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:42.890 [2024-07-15 21:32:33.584220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:42.890 [2024-07-15 21:32:33.584236] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:42.890 [2024-07-15 21:32:33.584244] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:42.890 [2024-07-15 21:32:33.584253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.584264] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:42.890 [2024-07-15 21:32:33.584272] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:42.890 [2024-07-15 21:32:33.584281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.584294] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:42.890 [2024-07-15 21:32:33.584305] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:42.890 [2024-07-15 21:32:33.584314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:42.890 [2024-07-15 21:32:33.592151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.592178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.592195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:42.890 [2024-07-15 21:32:33.592206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:42.890 ===================================================== 00:10:42.890 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:42.890 ===================================================== 00:10:42.891 Controller Capabilities/Features 00:10:42.891 ================================ 00:10:42.891 Vendor ID: 4e58 00:10:42.891 Subsystem Vendor ID: 4e58 00:10:42.891 Serial Number: SPDK2 00:10:42.891 Model Number: SPDK bdev Controller 00:10:42.891 Firmware Version: 24.09 00:10:42.891 Recommended Arb Burst: 6 00:10:42.891 IEEE OUI Identifier: 8d 6b 50 00:10:42.891 Multi-path I/O 00:10:42.891 May have multiple subsystem ports: Yes 00:10:42.891 May have multiple controllers: Yes 00:10:42.891 Associated with SR-IOV VF: No 00:10:42.891 Max Data Transfer Size: 131072 00:10:42.891 Max Number of Namespaces: 32 00:10:42.891 Max Number of I/O Queues: 127 00:10:42.891 NVMe Specification Version (VS): 1.3 00:10:42.891 NVMe Specification Version (Identify): 1.3 00:10:42.891 Maximum Queue Entries: 256 00:10:42.891 Contiguous Queues Required: Yes 00:10:42.891 Arbitration Mechanisms Supported 00:10:42.891 Weighted Round Robin: Not Supported 00:10:42.891 Vendor Specific: Not Supported 00:10:42.891 Reset Timeout: 15000 ms 00:10:42.891 Doorbell Stride: 4 bytes 00:10:42.891 NVM Subsystem Reset: Not Supported 00:10:42.891 Command Sets Supported 00:10:42.891 NVM Command Set: Supported 00:10:42.891 Boot Partition: Not Supported 
00:10:42.891 Memory Page Size Minimum: 4096 bytes 00:10:42.891 Memory Page Size Maximum: 4096 bytes 00:10:42.891 Persistent Memory Region: Not Supported 00:10:42.891 Optional Asynchronous Events Supported 00:10:42.891 Namespace Attribute Notices: Supported 00:10:42.891 Firmware Activation Notices: Not Supported 00:10:42.891 ANA Change Notices: Not Supported 00:10:42.891 PLE Aggregate Log Change Notices: Not Supported 00:10:42.891 LBA Status Info Alert Notices: Not Supported 00:10:42.891 EGE Aggregate Log Change Notices: Not Supported 00:10:42.891 Normal NVM Subsystem Shutdown event: Not Supported 00:10:42.891 Zone Descriptor Change Notices: Not Supported 00:10:42.891 Discovery Log Change Notices: Not Supported 00:10:42.891 Controller Attributes 00:10:42.891 128-bit Host Identifier: Supported 00:10:42.891 Non-Operational Permissive Mode: Not Supported 00:10:42.891 NVM Sets: Not Supported 00:10:42.891 Read Recovery Levels: Not Supported 00:10:42.891 Endurance Groups: Not Supported 00:10:42.891 Predictable Latency Mode: Not Supported 00:10:42.891 Traffic Based Keep ALive: Not Supported 00:10:42.891 Namespace Granularity: Not Supported 00:10:42.891 SQ Associations: Not Supported 00:10:42.891 UUID List: Not Supported 00:10:42.891 Multi-Domain Subsystem: Not Supported 00:10:42.891 Fixed Capacity Management: Not Supported 00:10:42.891 Variable Capacity Management: Not Supported 00:10:42.891 Delete Endurance Group: Not Supported 00:10:42.891 Delete NVM Set: Not Supported 00:10:42.891 Extended LBA Formats Supported: Not Supported 00:10:42.891 Flexible Data Placement Supported: Not Supported 00:10:42.891 00:10:42.891 Controller Memory Buffer Support 00:10:42.891 ================================ 00:10:42.891 Supported: No 00:10:42.891 00:10:42.891 Persistent Memory Region Support 00:10:42.891 ================================ 00:10:42.891 Supported: No 00:10:42.891 00:10:42.891 Admin Command Set Attributes 00:10:42.891 ============================ 00:10:42.891 Security 
Send/Receive: Not Supported 00:10:42.891 Format NVM: Not Supported 00:10:42.891 Firmware Activate/Download: Not Supported 00:10:42.891 Namespace Management: Not Supported 00:10:42.891 Device Self-Test: Not Supported 00:10:42.891 Directives: Not Supported 00:10:42.891 NVMe-MI: Not Supported 00:10:42.891 Virtualization Management: Not Supported 00:10:42.891 Doorbell Buffer Config: Not Supported 00:10:42.891 Get LBA Status Capability: Not Supported 00:10:42.891 Command & Feature Lockdown Capability: Not Supported 00:10:42.891 Abort Command Limit: 4 00:10:42.891 Async Event Request Limit: 4 00:10:42.891 Number of Firmware Slots: N/A 00:10:42.891 Firmware Slot 1 Read-Only: N/A 00:10:42.891 Firmware Activation Without Reset: N/A 00:10:42.891 Multiple Update Detection Support: N/A 00:10:42.891 Firmware Update Granularity: No Information Provided 00:10:42.891 Per-Namespace SMART Log: No 00:10:42.891 Asymmetric Namespace Access Log Page: Not Supported 00:10:42.891 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:42.891 Command Effects Log Page: Supported 00:10:42.891 Get Log Page Extended Data: Supported 00:10:42.891 Telemetry Log Pages: Not Supported 00:10:42.891 Persistent Event Log Pages: Not Supported 00:10:42.891 Supported Log Pages Log Page: May Support 00:10:42.891 Commands Supported & Effects Log Page: Not Supported 00:10:42.891 Feature Identifiers & Effects Log Page:May Support 00:10:42.891 NVMe-MI Commands & Effects Log Page: May Support 00:10:42.891 Data Area 4 for Telemetry Log: Not Supported 00:10:42.891 Error Log Page Entries Supported: 128 00:10:42.891 Keep Alive: Supported 00:10:42.891 Keep Alive Granularity: 10000 ms 00:10:42.891 00:10:42.891 NVM Command Set Attributes 00:10:42.891 ========================== 00:10:42.891 Submission Queue Entry Size 00:10:42.891 Max: 64 00:10:42.891 Min: 64 00:10:42.891 Completion Queue Entry Size 00:10:42.891 Max: 16 00:10:42.891 Min: 16 00:10:42.891 Number of Namespaces: 32 00:10:42.891 Compare Command: Supported 
00:10:42.891 Write Uncorrectable Command: Not Supported 00:10:42.891 Dataset Management Command: Supported 00:10:42.891 Write Zeroes Command: Supported 00:10:42.891 Set Features Save Field: Not Supported 00:10:42.891 Reservations: Not Supported 00:10:42.891 Timestamp: Not Supported 00:10:42.891 Copy: Supported 00:10:42.891 Volatile Write Cache: Present 00:10:42.891 Atomic Write Unit (Normal): 1 00:10:42.891 Atomic Write Unit (PFail): 1 00:10:42.891 Atomic Compare & Write Unit: 1 00:10:42.891 Fused Compare & Write: Supported 00:10:42.891 Scatter-Gather List 00:10:42.891 SGL Command Set: Supported (Dword aligned) 00:10:42.891 SGL Keyed: Not Supported 00:10:42.891 SGL Bit Bucket Descriptor: Not Supported 00:10:42.891 SGL Metadata Pointer: Not Supported 00:10:42.891 Oversized SGL: Not Supported 00:10:42.891 SGL Metadata Address: Not Supported 00:10:42.891 SGL Offset: Not Supported 00:10:42.891 Transport SGL Data Block: Not Supported 00:10:42.891 Replay Protected Memory Block: Not Supported 00:10:42.891 00:10:42.891 Firmware Slot Information 00:10:42.891 ========================= 00:10:42.891 Active slot: 1 00:10:42.891 Slot 1 Firmware Revision: 24.09 00:10:42.891 00:10:42.891 00:10:42.891 Commands Supported and Effects 00:10:42.891 ============================== 00:10:42.891 Admin Commands 00:10:42.891 -------------- 00:10:42.891 Get Log Page (02h): Supported 00:10:42.891 Identify (06h): Supported 00:10:42.891 Abort (08h): Supported 00:10:42.891 Set Features (09h): Supported 00:10:42.891 Get Features (0Ah): Supported 00:10:42.891 Asynchronous Event Request (0Ch): Supported 00:10:42.891 Keep Alive (18h): Supported 00:10:42.891 I/O Commands 00:10:42.891 ------------ 00:10:42.891 Flush (00h): Supported LBA-Change 00:10:42.891 Write (01h): Supported LBA-Change 00:10:42.891 Read (02h): Supported 00:10:42.891 Compare (05h): Supported 00:10:42.891 Write Zeroes (08h): Supported LBA-Change 00:10:42.891 Dataset Management (09h): Supported LBA-Change 00:10:42.891 Copy (19h): 
Supported LBA-Change 00:10:42.891 00:10:42.891 Error Log 00:10:42.891 ========= 00:10:42.891 00:10:42.891 Arbitration 00:10:42.891 =========== 00:10:42.891 Arbitration Burst: 1 00:10:42.891 00:10:42.891 Power Management 00:10:42.891 ================ 00:10:42.891 Number of Power States: 1 00:10:42.891 Current Power State: Power State #0 00:10:42.891 Power State #0: 00:10:42.891 Max Power: 0.00 W 00:10:42.891 Non-Operational State: Operational 00:10:42.891 Entry Latency: Not Reported 00:10:42.891 Exit Latency: Not Reported 00:10:42.891 Relative Read Throughput: 0 00:10:42.891 Relative Read Latency: 0 00:10:42.891 Relative Write Throughput: 0 00:10:42.891 Relative Write Latency: 0 00:10:42.891 Idle Power: Not Reported 00:10:42.891 Active Power: Not Reported 00:10:42.891 Non-Operational Permissive Mode: Not Supported 00:10:42.891 00:10:42.891 Health Information 00:10:42.891 ================== 00:10:42.891 Critical Warnings: 00:10:42.891 Available Spare Space: OK 00:10:42.891 Temperature: OK 00:10:42.891 Device Reliability: OK 00:10:42.891 Read Only: No 00:10:42.891 Volatile Memory Backup: OK 00:10:42.891 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:42.891 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:42.891 Available Spare: 0% 00:10:42.891 Available Sp[2024-07-15 21:32:33.592339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:42.891 [2024-07-15 21:32:33.600149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:42.891 [2024-07-15 21:32:33.600211] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:42.891 [2024-07-15 21:32:33.600228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.891 [2024-07-15 21:32:33.600240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.891 [2024-07-15 21:32:33.600250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.892 [2024-07-15 21:32:33.600261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:42.892 [2024-07-15 21:32:33.600322] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:42.892 [2024-07-15 21:32:33.600342] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:42.892 [2024-07-15 21:32:33.601321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:42.892 [2024-07-15 21:32:33.601390] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:42.892 [2024-07-15 21:32:33.601404] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:42.892 [2024-07-15 21:32:33.602333] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:42.892 [2024-07-15 21:32:33.602356] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:42.892 [2024-07-15 21:32:33.602418] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:42.892 [2024-07-15 21:32:33.603698] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:42.892 are Threshold: 0% 00:10:42.892 
Life Percentage Used: 0% 00:10:42.892 Data Units Read: 0 00:10:42.892 Data Units Written: 0 00:10:42.892 Host Read Commands: 0 00:10:42.892 Host Write Commands: 0 00:10:42.892 Controller Busy Time: 0 minutes 00:10:42.892 Power Cycles: 0 00:10:42.892 Power On Hours: 0 hours 00:10:42.892 Unsafe Shutdowns: 0 00:10:42.892 Unrecoverable Media Errors: 0 00:10:42.892 Lifetime Error Log Entries: 0 00:10:42.892 Warning Temperature Time: 0 minutes 00:10:42.892 Critical Temperature Time: 0 minutes 00:10:42.892 00:10:42.892 Number of Queues 00:10:42.892 ================ 00:10:42.892 Number of I/O Submission Queues: 127 00:10:42.892 Number of I/O Completion Queues: 127 00:10:42.892 00:10:42.892 Active Namespaces 00:10:42.892 ================= 00:10:42.892 Namespace ID:1 00:10:42.892 Error Recovery Timeout: Unlimited 00:10:42.892 Command Set Identifier: NVM (00h) 00:10:42.892 Deallocate: Supported 00:10:42.892 Deallocated/Unwritten Error: Not Supported 00:10:42.892 Deallocated Read Value: Unknown 00:10:42.892 Deallocate in Write Zeroes: Not Supported 00:10:42.892 Deallocated Guard Field: 0xFFFF 00:10:42.892 Flush: Supported 00:10:42.892 Reservation: Supported 00:10:42.892 Namespace Sharing Capabilities: Multiple Controllers 00:10:42.892 Size (in LBAs): 131072 (0GiB) 00:10:42.892 Capacity (in LBAs): 131072 (0GiB) 00:10:42.892 Utilization (in LBAs): 131072 (0GiB) 00:10:42.892 NGUID: E3308DC0ADE34A0AAFF0403F8B1F9DFE 00:10:42.892 UUID: e3308dc0-ade3-4a0a-aff0-403f8b1f9dfe 00:10:42.892 Thin Provisioning: Not Supported 00:10:42.892 Per-NS Atomic Units: Yes 00:10:42.892 Atomic Boundary Size (Normal): 0 00:10:42.892 Atomic Boundary Size (PFail): 0 00:10:42.892 Atomic Boundary Offset: 0 00:10:42.892 Maximum Single Source Range Length: 65535 00:10:42.892 Maximum Copy Length: 65535 00:10:42.892 Maximum Source Range Count: 1 00:10:42.892 NGUID/EUI64 Never Reused: No 00:10:42.892 Namespace Write Protected: No 00:10:42.892 Number of LBA Formats: 1 00:10:42.892 Current LBA Format: LBA Format 
#00 00:10:42.892 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.892 00:10:42.892 21:32:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:43.148 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.148 [2024-07-15 21:32:33.816608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:48.409 Initializing NVMe Controllers 00:10:48.409 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:48.409 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:48.409 Initialization complete. Launching workers. 00:10:48.409 ======================================================== 00:10:48.409 Latency(us) 00:10:48.409 Device Information : IOPS MiB/s Average min max 00:10:48.409 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31843.14 124.39 4018.84 1251.62 13344.61 00:10:48.409 ======================================================== 00:10:48.409 Total : 31843.14 124.39 4018.84 1251.62 13344.61 00:10:48.409 00:10:48.409 [2024-07-15 21:32:38.922434] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:48.409 21:32:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:48.409 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.409 [2024-07-15 21:32:39.143008] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:53.798 
Initializing NVMe Controllers 00:10:53.798 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:53.798 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:53.798 Initialization complete. Launching workers. 00:10:53.798 ======================================================== 00:10:53.798 Latency(us) 00:10:53.798 Device Information : IOPS MiB/s Average min max 00:10:53.798 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31510.64 123.09 4061.24 1246.42 8257.16 00:10:53.798 ======================================================== 00:10:53.798 Total : 31510.64 123.09 4061.24 1246.42 8257.16 00:10:53.798 00:10:53.798 [2024-07-15 21:32:44.166173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:53.798 21:32:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:53.798 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.798 [2024-07-15 21:32:44.377736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:59.060 [2024-07-15 21:32:49.518283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:59.060 Initializing NVMe Controllers 00:10:59.060 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:59.060 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:59.060 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:59.060 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 
00:10:59.060 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:59.060 Initialization complete. Launching workers. 00:10:59.060 Starting thread on core 2 00:10:59.060 Starting thread on core 3 00:10:59.060 Starting thread on core 1 00:10:59.060 21:32:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:59.060 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.060 [2024-07-15 21:32:49.792448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:02.342 [2024-07-15 21:32:52.870516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:02.342 Initializing NVMe Controllers 00:11:02.342 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:02.342 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:02.342 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:02.342 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:02.342 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:02.342 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:02.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:02.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:02.342 Initialization complete. Launching workers. 
00:11:02.342 Starting thread on core 1 with urgent priority queue 00:11:02.342 Starting thread on core 2 with urgent priority queue 00:11:02.342 Starting thread on core 3 with urgent priority queue 00:11:02.342 Starting thread on core 0 with urgent priority queue 00:11:02.342 SPDK bdev Controller (SPDK2 ) core 0: 6824.00 IO/s 14.65 secs/100000 ios 00:11:02.342 SPDK bdev Controller (SPDK2 ) core 1: 6268.33 IO/s 15.95 secs/100000 ios 00:11:02.342 SPDK bdev Controller (SPDK2 ) core 2: 8442.00 IO/s 11.85 secs/100000 ios 00:11:02.342 SPDK bdev Controller (SPDK2 ) core 3: 6103.33 IO/s 16.38 secs/100000 ios 00:11:02.342 ======================================================== 00:11:02.342 00:11:02.342 21:32:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:02.342 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.606 [2024-07-15 21:32:53.144643] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:02.607 Initializing NVMe Controllers 00:11:02.607 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:02.607 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:02.607 Namespace ID: 1 size: 0GB 00:11:02.607 Initialization complete. 00:11:02.607 INFO: using host memory buffer for IO 00:11:02.607 Hello world! 
00:11:02.607 [2024-07-15 21:32:53.153700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:02.607 21:32:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:02.607 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.865 [2024-07-15 21:32:53.418635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:03.799 Initializing NVMe Controllers 00:11:03.799 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:03.799 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:03.799 Initialization complete. Launching workers. 00:11:03.799 submit (in ns) avg, min, max = 7099.3, 3783.3, 4007165.6 00:11:03.799 complete (in ns) avg, min, max = 25037.1, 2251.1, 4013906.7 00:11:03.799 00:11:03.799 Submit histogram 00:11:03.799 ================ 00:11:03.799 Range in us Cumulative Count 00:11:03.799 3.769 - 3.793: 0.0576% ( 8) 00:11:03.799 3.793 - 3.816: 4.6646% ( 640) 00:11:03.799 3.816 - 3.840: 16.5347% ( 1649) 00:11:03.799 3.840 - 3.864: 39.7783% ( 3229) 00:11:03.799 3.864 - 3.887: 60.4377% ( 2870) 00:11:03.799 3.887 - 3.911: 74.3449% ( 1932) 00:11:03.799 3.911 - 3.935: 83.8036% ( 1314) 00:11:03.799 3.935 - 3.959: 88.0579% ( 591) 00:11:03.799 3.959 - 3.982: 89.1880% ( 157) 00:11:03.799 3.982 - 4.006: 90.1598% ( 135) 00:11:03.799 4.006 - 4.030: 92.0818% ( 267) 00:11:03.799 4.030 - 4.053: 94.2989% ( 308) 00:11:03.799 4.053 - 4.077: 96.0913% ( 249) 00:11:03.799 4.077 - 4.101: 97.0847% ( 138) 00:11:03.799 4.101 - 4.124: 97.6677% ( 81) 00:11:03.799 4.124 - 4.148: 97.8909% ( 31) 00:11:03.799 4.148 - 4.172: 98.0060% ( 16) 00:11:03.799 4.172 - 4.196: 98.0708% ( 9) 00:11:03.799 4.196 - 4.219: 98.1212% ( 7) 00:11:03.799 4.219 - 4.243: 98.1500% ( 4) 
00:11:03.799 4.243 - 4.267: 98.1644% ( 2) 00:11:03.799 4.267 - 4.290: 98.2076% ( 6) 00:11:03.799 4.290 - 4.314: 98.2364% ( 4) 00:11:03.799 4.314 - 4.338: 98.2580% ( 3) 00:11:03.799 4.338 - 4.361: 98.3228% ( 9) 00:11:03.799 4.361 - 4.385: 98.3948% ( 10) 00:11:03.799 4.385 - 4.409: 98.4595% ( 9) 00:11:03.799 4.409 - 4.433: 98.5315% ( 10) 00:11:03.799 4.433 - 4.456: 98.6251% ( 13) 00:11:03.799 4.456 - 4.480: 98.7547% ( 18) 00:11:03.799 4.480 - 4.504: 98.8051% ( 7) 00:11:03.799 4.504 - 4.527: 98.8339% ( 4) 00:11:03.799 4.527 - 4.551: 98.8555% ( 3) 00:11:03.799 4.551 - 4.575: 98.8771% ( 3) 00:11:03.799 4.575 - 4.599: 98.8914% ( 2) 00:11:03.799 4.599 - 4.622: 98.9130% ( 3) 00:11:03.799 4.622 - 4.646: 98.9418% ( 4) 00:11:03.799 4.646 - 4.670: 98.9562% ( 2) 00:11:03.799 4.693 - 4.717: 98.9778% ( 3) 00:11:03.799 4.717 - 4.741: 98.9850% ( 1) 00:11:03.799 4.741 - 4.764: 98.9994% ( 2) 00:11:03.799 4.764 - 4.788: 99.0210% ( 3) 00:11:03.799 4.788 - 4.812: 99.0426% ( 3) 00:11:03.799 4.812 - 4.836: 99.0570% ( 2) 00:11:03.799 4.836 - 4.859: 99.0714% ( 2) 00:11:03.799 4.883 - 4.907: 99.0858% ( 2) 00:11:03.799 4.930 - 4.954: 99.1074% ( 3) 00:11:03.799 4.954 - 4.978: 99.1146% ( 1) 00:11:03.799 4.978 - 5.001: 99.1218% ( 1) 00:11:03.799 5.096 - 5.120: 99.1362% ( 2) 00:11:03.799 5.215 - 5.239: 99.1506% ( 2) 00:11:03.799 5.262 - 5.286: 99.1578% ( 1) 00:11:03.799 5.286 - 5.310: 99.1650% ( 1) 00:11:03.799 5.547 - 5.570: 99.1722% ( 1) 00:11:03.799 5.665 - 5.689: 99.1794% ( 1) 00:11:03.799 5.807 - 5.831: 99.1866% ( 1) 00:11:03.799 5.950 - 5.973: 99.1938% ( 1) 00:11:03.799 5.973 - 5.997: 99.2010% ( 1) 00:11:03.799 6.068 - 6.116: 99.2082% ( 1) 00:11:03.799 6.353 - 6.400: 99.2154% ( 1) 00:11:03.799 6.400 - 6.447: 99.2298% ( 2) 00:11:03.799 6.447 - 6.495: 99.2442% ( 2) 00:11:03.799 6.495 - 6.542: 99.2514% ( 1) 00:11:03.799 6.542 - 6.590: 99.2586% ( 1) 00:11:03.799 6.590 - 6.637: 99.2658% ( 1) 00:11:03.799 6.637 - 6.684: 99.2730% ( 1) 00:11:03.799 6.684 - 6.732: 99.2802% ( 1) 00:11:03.799 6.732 - 
6.779: 99.2874% ( 1) 00:11:03.799 6.874 - 6.921: 99.2946% ( 1) 00:11:03.799 6.921 - 6.969: 99.3018% ( 1) 00:11:03.799 6.969 - 7.016: 99.3090% ( 1) 00:11:03.799 7.016 - 7.064: 99.3162% ( 1) 00:11:03.799 7.111 - 7.159: 99.3305% ( 2) 00:11:03.799 7.253 - 7.301: 99.3593% ( 4) 00:11:03.799 7.443 - 7.490: 99.3737% ( 2) 00:11:03.799 7.490 - 7.538: 99.3809% ( 1) 00:11:03.799 7.538 - 7.585: 99.3953% ( 2) 00:11:03.799 7.585 - 7.633: 99.4169% ( 3) 00:11:03.799 7.680 - 7.727: 99.4241% ( 1) 00:11:03.799 7.727 - 7.775: 99.4529% ( 4) 00:11:03.799 7.775 - 7.822: 99.4673% ( 2) 00:11:03.799 7.822 - 7.870: 99.4889% ( 3) 00:11:03.799 7.870 - 7.917: 99.5033% ( 2) 00:11:03.799 7.917 - 7.964: 99.5177% ( 2) 00:11:03.799 7.964 - 8.012: 99.5249% ( 1) 00:11:03.799 8.012 - 8.059: 99.5321% ( 1) 00:11:03.799 8.107 - 8.154: 99.5465% ( 2) 00:11:03.799 8.154 - 8.201: 99.5609% ( 2) 00:11:03.799 8.344 - 8.391: 99.5897% ( 4) 00:11:03.799 8.439 - 8.486: 99.5969% ( 1) 00:11:03.799 8.533 - 8.581: 99.6113% ( 2) 00:11:03.799 8.581 - 8.628: 99.6185% ( 1) 00:11:03.799 8.628 - 8.676: 99.6257% ( 1) 00:11:03.799 8.676 - 8.723: 99.6401% ( 2) 00:11:03.799 8.723 - 8.770: 99.6545% ( 2) 00:11:03.799 9.055 - 9.102: 99.6617% ( 1) 00:11:03.799 9.102 - 9.150: 99.6689% ( 1) 00:11:03.799 9.197 - 9.244: 99.6761% ( 1) 00:11:03.799 9.292 - 9.339: 99.6833% ( 1) 00:11:03.799 9.387 - 9.434: 99.6977% ( 2) 00:11:03.799 9.481 - 9.529: 99.7049% ( 1) 00:11:03.799 9.529 - 9.576: 99.7121% ( 1) 00:11:03.799 9.671 - 9.719: 99.7265% ( 2) 00:11:03.799 9.908 - 9.956: 99.7409% ( 2) 00:11:03.799 9.956 - 10.003: 99.7481% ( 1) 00:11:03.799 10.050 - 10.098: 99.7553% ( 1) 00:11:03.799 10.098 - 10.145: 99.7625% ( 1) 00:11:03.799 10.335 - 10.382: 99.7697% ( 1) 00:11:03.799 10.430 - 10.477: 99.7768% ( 1) 00:11:03.799 10.619 - 10.667: 99.7840% ( 1) 00:11:03.799 10.761 - 10.809: 99.7912% ( 1) 00:11:03.799 10.951 - 10.999: 99.7984% ( 1) 00:11:03.799 11.710 - 11.757: 99.8056% ( 1) 00:11:03.799 11.757 - 11.804: 99.8128% ( 1) 00:11:03.799 11.804 - 
11.852: 99.8200% ( 1) 00:11:03.799 11.852 - 11.899: 99.8272% ( 1) 00:11:03.799 11.994 - 12.041: 99.8344% ( 1) 00:11:03.799 12.089 - 12.136: 99.8416% ( 1) 00:11:03.799 12.421 - 12.516: 99.8560% ( 2) 00:11:03.799 12.516 - 12.610: 99.8704% ( 2) 00:11:03.799 12.705 - 12.800: 99.8776% ( 1) 00:11:03.799 14.412 - 14.507: 99.8848% ( 1) 00:11:03.799 15.455 - 15.550: 99.8920% ( 1) 00:11:03.799 15.550 - 15.644: 99.8992% ( 1) 00:11:03.799 15.644 - 15.739: 99.9136% ( 2) 00:11:03.799 15.929 - 16.024: 99.9208% ( 1) 00:11:03.799 3980.705 - 4004.978: 99.9784% ( 8) 00:11:03.799 4004.978 - 4029.250: 100.0000% ( 3) 00:11:03.799 00:11:03.799 Complete histogram 00:11:03.799 ================== 00:11:03.799 Range in us Cumulative Count 00:11:03.799 2.240 - 2.252: 0.0072% ( 1) 00:11:03.799 2.252 - 2.264: 11.0423% ( 1533) 00:11:03.799 2.264 - 2.276: 44.4141% ( 4636) 00:11:03.799 2.276 - 2.287: 51.6772% ( 1009) 00:11:03.799 2.287 - 2.299: 72.5022% ( 2893) 00:11:03.799 2.299 - 2.311: 89.1592% ( 2314) 00:11:03.799 2.311 - 2.323: 94.7740% ( 780) 00:11:03.799 2.323 - 2.335: 96.4296% ( 230) 00:11:03.799 2.335 - 2.347: 97.0487% ( 86) 00:11:03.799 2.347 - 2.359: 97.6965% ( 90) 00:11:03.799 2.359 - 2.370: 98.1068% ( 57) 00:11:03.799 2.370 - 2.382: 98.3012% ( 27) 00:11:03.799 2.382 - 2.394: 98.4308% ( 18) 00:11:03.799 2.394 - 2.406: 98.5171% ( 12) 00:11:03.799 2.406 - 2.418: 98.5819% ( 9) 00:11:03.799 2.418 - 2.430: 98.6827% ( 14) 00:11:03.799 2.430 - 2.441: 98.7043% ( 3) 00:11:03.799 2.441 - 2.453: 98.7259% ( 3) 00:11:03.799 2.453 - 2.465: 98.7403% ( 2) 00:11:03.799 2.465 - 2.477: 98.7907% ( 7) 00:11:03.799 2.477 - 2.489: 98.8051% ( 2) 00:11:03.799 2.489 - 2.501: 98.8483% ( 6) 00:11:03.799 2.501 - 2.513: 98.9130% ( 9) 00:11:03.799 2.513 - 2.524: 98.9202% ( 1) 00:11:03.799 2.524 - 2.536: 98.9346% ( 2) 00:11:03.799 2.536 - 2.548: 98.9490% ( 2) 00:11:03.799 2.596 - 2.607: 98.9562% ( 1) 00:11:03.799 2.631 - 2.643: 98.9634% ( 1) 00:11:03.799 2.643 - 2.655: 98.9706% ( 1) 00:11:03.799 2.714 - 2.726: 
98.9778% ( 1) 00:11:03.799 2.726 - 2.738: 98.9850% ( 1) 00:11:03.799 2.750 - 2.761: 99.0066% ( 3) 00:11:03.799 2.761 - 2.773: 99.0138% ( 1) 00:11:03.799 2.809 - 2.821: 99.0210% ( 1) 00:11:03.799 2.833 - 2.844: 99.0282% ( 1) 00:11:03.799 2.844 - 2.856: 99.0354% ( 1) 00:11:03.799 2.856 - 2.868: 99.0570% ( 3) [2024-07-15 21:32:54.512699] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:03.799 2.868 - 2.880: 99.0642% ( 1) 00:11:03.799 2.880 - 2.892: 99.0714% ( 1) 00:11:03.799 2.892 - 2.904: 99.0786% ( 1) 00:11:03.799 2.904 - 2.916: 99.0930% ( 2) 00:11:03.799 2.916 - 2.927: 99.1002% ( 1) 00:11:03.799 2.939 - 2.951: 99.1074% ( 1) 00:11:03.799 2.963 - 2.975: 99.1146% ( 1) 00:11:03.800 2.987 - 2.999: 99.1218% ( 1) 00:11:03.800 3.010 - 3.022: 99.1290% ( 1) 00:11:03.800 3.022 - 3.034: 99.1434% ( 2) 00:11:03.800 3.058 - 3.081: 99.1578% ( 2) 00:11:03.800 3.081 - 3.105: 99.1650% ( 1) 00:11:03.800 3.105 - 3.129: 99.1794% ( 2) 00:11:03.800 3.129 - 3.153: 99.1938% ( 2) 00:11:03.800 3.153 - 3.176: 99.2154% ( 3) 00:11:03.800 3.556 - 3.579: 99.2226% ( 1) 00:11:03.800 3.674 - 3.698: 99.2298% ( 1) 00:11:03.800 3.698 - 3.721: 99.2370% ( 1) 00:11:03.800 4.006 - 4.030: 99.2442% ( 1) 00:11:03.800 4.219 - 4.243: 99.2514% ( 1) 00:11:03.800 4.243 - 4.267: 99.2586% ( 1) 00:11:03.800 4.456 - 4.480: 99.2658% ( 1) 00:11:03.800 5.381 - 5.404: 99.2802% ( 2) 00:11:03.800 5.641 - 5.665: 99.2874% ( 1) 00:11:03.800 5.665 - 5.689: 99.2946% ( 1) 00:11:03.800 5.689 - 5.713: 99.3018% ( 1) 00:11:03.800 5.713 - 5.736: 99.3090% ( 1) 00:11:03.800 5.736 - 5.760: 99.3162% ( 1) 00:11:03.800 5.807 - 5.831: 99.3234% ( 1) 00:11:03.800 6.116 - 6.163: 99.3305% ( 1) 00:11:03.800 6.258 - 6.305: 99.3377% ( 1) 00:11:03.800 6.684 - 6.732: 99.3449% ( 1) 00:11:03.800 6.779 - 6.827: 99.3593% ( 2) 00:11:03.800 6.874 - 6.921: 99.3737% ( 2) 00:11:03.800 6.921 - 6.969: 99.3809% ( 1) 00:11:03.800 7.206 - 7.253: 99.3881% ( 1) 00:11:03.800 7.633 - 7.680: 
99.3953% ( 1) 00:11:03.800 8.581 - 8.628: 99.4025% ( 1) 00:11:03.800 13.748 - 13.843: 99.4097% ( 1) 00:11:03.800 14.317 - 14.412: 99.4169% ( 1) 00:11:03.800 15.550 - 15.644: 99.4241% ( 1) 00:11:03.800 17.351 - 17.446: 99.4313% ( 1) 00:11:03.800 3980.705 - 4004.978: 99.8992% ( 65) 00:11:03.800 4004.978 - 4029.250: 100.0000% ( 14) 00:11:03.800 00:11:03.800 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:03.800 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:03.800 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:03.800 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:03.800 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:04.058 [ 00:11:04.058 { 00:11:04.058 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:04.058 "subtype": "Discovery", 00:11:04.058 "listen_addresses": [], 00:11:04.058 "allow_any_host": true, 00:11:04.058 "hosts": [] 00:11:04.058 }, 00:11:04.058 { 00:11:04.058 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:04.058 "subtype": "NVMe", 00:11:04.058 "listen_addresses": [ 00:11:04.058 { 00:11:04.058 "trtype": "VFIOUSER", 00:11:04.058 "adrfam": "IPv4", 00:11:04.058 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:04.058 "trsvcid": "0" 00:11:04.058 } 00:11:04.058 ], 00:11:04.058 "allow_any_host": true, 00:11:04.058 "hosts": [], 00:11:04.058 "serial_number": "SPDK1", 00:11:04.058 "model_number": "SPDK bdev Controller", 00:11:04.058 "max_namespaces": 32, 00:11:04.058 "min_cntlid": 1, 00:11:04.058 "max_cntlid": 65519, 00:11:04.058 "namespaces": [ 00:11:04.058 { 00:11:04.058 "nsid": 1, 00:11:04.058 "bdev_name": "Malloc1", 00:11:04.058 "name": 
"Malloc1", 00:11:04.058 "nguid": "038FD1520437432F9CAEC0B56D15FD75", 00:11:04.058 "uuid": "038fd152-0437-432f-9cae-c0b56d15fd75" 00:11:04.058 }, 00:11:04.058 { 00:11:04.058 "nsid": 2, 00:11:04.058 "bdev_name": "Malloc3", 00:11:04.058 "name": "Malloc3", 00:11:04.058 "nguid": "6679B6AA5AE9434492CF25896C819FE7", 00:11:04.058 "uuid": "6679b6aa-5ae9-4344-92cf-25896c819fe7" 00:11:04.058 } 00:11:04.058 ] 00:11:04.058 }, 00:11:04.058 { 00:11:04.058 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:04.058 "subtype": "NVMe", 00:11:04.058 "listen_addresses": [ 00:11:04.058 { 00:11:04.058 "trtype": "VFIOUSER", 00:11:04.058 "adrfam": "IPv4", 00:11:04.058 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:04.058 "trsvcid": "0" 00:11:04.058 } 00:11:04.058 ], 00:11:04.058 "allow_any_host": true, 00:11:04.058 "hosts": [], 00:11:04.058 "serial_number": "SPDK2", 00:11:04.058 "model_number": "SPDK bdev Controller", 00:11:04.058 "max_namespaces": 32, 00:11:04.058 "min_cntlid": 1, 00:11:04.058 "max_cntlid": 65519, 00:11:04.058 "namespaces": [ 00:11:04.058 { 00:11:04.058 "nsid": 1, 00:11:04.058 "bdev_name": "Malloc2", 00:11:04.058 "name": "Malloc2", 00:11:04.058 "nguid": "E3308DC0ADE34A0AAFF0403F8B1F9DFE", 00:11:04.058 "uuid": "e3308dc0-ade3-4a0a-aff0-403f8b1f9dfe" 00:11:04.058 } 00:11:04.058 ] 00:11:04.058 } 00:11:04.058 ] 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=313573 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # local i=0 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=1 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:11:04.316 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # i=2 00:11:04.316 21:32:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # sleep 0.1 00:11:04.316 [2024-07-15 21:32:55.007036] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:04.316 21:32:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:04.316 21:32:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:04.316 21:32:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:04.316 21:32:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:04.316 21:32:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:04.882 Malloc4 00:11:04.882 21:32:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:05.154 [2024-07-15 21:32:55.680587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:05.154 21:32:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:05.154 Asynchronous Event Request test 00:11:05.154 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:05.154 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:05.154 Registering asynchronous event callbacks... 00:11:05.154 Starting namespace attribute notice tests for all controllers... 00:11:05.154 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:05.154 aer_cb - Changed Namespace 00:11:05.154 Cleaning up... 
00:11:05.412 [ 00:11:05.412 { 00:11:05.412 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:05.412 "subtype": "Discovery", 00:11:05.412 "listen_addresses": [], 00:11:05.412 "allow_any_host": true, 00:11:05.412 "hosts": [] 00:11:05.412 }, 00:11:05.412 { 00:11:05.412 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:05.412 "subtype": "NVMe", 00:11:05.412 "listen_addresses": [ 00:11:05.412 { 00:11:05.412 "trtype": "VFIOUSER", 00:11:05.412 "adrfam": "IPv4", 00:11:05.412 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:05.412 "trsvcid": "0" 00:11:05.412 } 00:11:05.412 ], 00:11:05.412 "allow_any_host": true, 00:11:05.412 "hosts": [], 00:11:05.412 "serial_number": "SPDK1", 00:11:05.412 "model_number": "SPDK bdev Controller", 00:11:05.412 "max_namespaces": 32, 00:11:05.412 "min_cntlid": 1, 00:11:05.412 "max_cntlid": 65519, 00:11:05.412 "namespaces": [ 00:11:05.412 { 00:11:05.412 "nsid": 1, 00:11:05.412 "bdev_name": "Malloc1", 00:11:05.412 "name": "Malloc1", 00:11:05.412 "nguid": "038FD1520437432F9CAEC0B56D15FD75", 00:11:05.412 "uuid": "038fd152-0437-432f-9cae-c0b56d15fd75" 00:11:05.412 }, 00:11:05.412 { 00:11:05.412 "nsid": 2, 00:11:05.412 "bdev_name": "Malloc3", 00:11:05.412 "name": "Malloc3", 00:11:05.412 "nguid": "6679B6AA5AE9434492CF25896C819FE7", 00:11:05.412 "uuid": "6679b6aa-5ae9-4344-92cf-25896c819fe7" 00:11:05.412 } 00:11:05.412 ] 00:11:05.412 }, 00:11:05.412 { 00:11:05.412 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:05.412 "subtype": "NVMe", 00:11:05.412 "listen_addresses": [ 00:11:05.412 { 00:11:05.412 "trtype": "VFIOUSER", 00:11:05.412 "adrfam": "IPv4", 00:11:05.412 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:05.412 "trsvcid": "0" 00:11:05.412 } 00:11:05.412 ], 00:11:05.412 "allow_any_host": true, 00:11:05.412 "hosts": [], 00:11:05.412 "serial_number": "SPDK2", 00:11:05.412 "model_number": "SPDK bdev Controller", 00:11:05.412 "max_namespaces": 32, 00:11:05.412 "min_cntlid": 1, 00:11:05.412 "max_cntlid": 65519, 00:11:05.412 "namespaces": [ 
00:11:05.412 { 00:11:05.412 "nsid": 1, 00:11:05.412 "bdev_name": "Malloc2", 00:11:05.412 "name": "Malloc2", 00:11:05.412 "nguid": "E3308DC0ADE34A0AAFF0403F8B1F9DFE", 00:11:05.412 "uuid": "e3308dc0-ade3-4a0a-aff0-403f8b1f9dfe" 00:11:05.412 }, 00:11:05.412 { 00:11:05.412 "nsid": 2, 00:11:05.412 "bdev_name": "Malloc4", 00:11:05.412 "name": "Malloc4", 00:11:05.412 "nguid": "EAC39C025F9B44AE8C8B4AED8A939933", 00:11:05.412 "uuid": "eac39c02-5f9b-44ae-8c8b-4aed8a939933" 00:11:05.412 } 00:11:05.412 ] 00:11:05.412 } 00:11:05.412 ] 00:11:05.412 21:32:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 313573 00:11:05.412 21:32:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:05.413 21:32:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 309206 00:11:05.413 21:32:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 309206 ']' 00:11:05.413 21:32:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 309206 00:11:05.413 21:32:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:05.413 21:32:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.413 21:32:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 309206 00:11:05.413 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:05.413 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:05.413 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 309206' 00:11:05.413 killing process with pid 309206 00:11:05.413 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 309206 00:11:05.413 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 309206 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm 
-rf /var/run/vfio-user 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=313687 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 313687' 00:11:05.671 Process pid: 313687 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 313687 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 313687 ']' 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.671 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:05.671 [2024-07-15 21:32:56.296305] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:05.671 [2024-07-15 21:32:56.297307] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:11:05.671 [2024-07-15 21:32:56.297370] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.671 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.671 [2024-07-15 21:32:56.347510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.671 [2024-07-15 21:32:56.447844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.671 [2024-07-15 21:32:56.447890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.671 [2024-07-15 21:32:56.447915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.671 [2024-07-15 21:32:56.447927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.671 [2024-07-15 21:32:56.447937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:05.671 [2024-07-15 21:32:56.447984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.671 [2024-07-15 21:32:56.448014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.671 [2024-07-15 21:32:56.448062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.671 [2024-07-15 21:32:56.448064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.929 [2024-07-15 21:32:56.531193] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:05.929 [2024-07-15 21:32:56.531385] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:05.929 [2024-07-15 21:32:56.531618] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:05.929 [2024-07-15 21:32:56.532097] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:05.929 [2024-07-15 21:32:56.532321] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:11:05.929 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.929 21:32:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:05.930 21:32:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:06.863 21:32:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:07.121 21:32:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:07.121 21:32:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:07.121 21:32:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:07.121 21:32:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:07.121 21:32:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:07.378 Malloc1 00:11:07.378 21:32:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:07.638 21:32:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:07.896 21:32:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:08.154 21:32:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:08.154 21:32:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:11:08.154 21:32:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:08.719 Malloc2 00:11:08.719 21:32:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:08.976 21:32:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:09.233 21:32:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 313687 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 313687 ']' 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 313687 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 313687 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 313687' 00:11:09.508 killing 
process with pid 313687 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 313687 00:11:09.508 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 313687 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:09.767 00:11:09.767 real 0m52.973s 00:11:09.767 user 3m29.806s 00:11:09.767 sys 0m4.301s 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:09.767 ************************************ 00:11:09.767 END TEST nvmf_vfio_user 00:11:09.767 ************************************ 00:11:09.767 21:33:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.767 21:33:00 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:09.767 21:33:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.767 21:33:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.767 21:33:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.767 ************************************ 00:11:09.767 START TEST nvmf_vfio_user_nvme_compliance 00:11:09.767 ************************************ 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:09.767 * Looking for test storage... 
00:11:09.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.767 21:33:00 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.767 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:09.768 21:33:00 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=314192 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 314192' 00:11:09.768 Process pid: 314192 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 314192 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 314192 ']' 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.768 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:09.768 [2024-07-15 21:33:00.540749] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:11:09.768 [2024-07-15 21:33:00.540829] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.025 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.025 [2024-07-15 21:33:00.594615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:10.025 [2024-07-15 21:33:00.693505] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.025 [2024-07-15 21:33:00.693552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:10.025 [2024-07-15 21:33:00.693578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.025 [2024-07-15 21:33:00.693591] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.025 [2024-07-15 21:33:00.693601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.025 [2024-07-15 21:33:00.693650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.025 [2024-07-15 21:33:00.693732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.025 [2024-07-15 21:33:00.693739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.025 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.025 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:11:10.025 21:33:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:11.395 malloc0 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.395 21:33:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:11.395 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.395 00:11:11.395 00:11:11.395 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.395 http://cunit.sourceforge.net/ 00:11:11.395 00:11:11.395 00:11:11.395 Suite: nvme_compliance 00:11:11.395 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 21:33:02.025424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:11.395 [2024-07-15 21:33:02.026877] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:11.395 [2024-07-15 21:33:02.026899] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:11.395 [2024-07-15 21:33:02.026911] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:11.395 [2024-07-15 21:33:02.028454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:11.395 passed 00:11:11.395 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 21:33:02.127045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:11.395 [2024-07-15 21:33:02.130070] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:11.395 passed 00:11:11.652 Test: admin_identify_ns ...[2024-07-15 21:33:02.225585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:11.652 [2024-07-15 21:33:02.285167] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:11.652 [2024-07-15 21:33:02.293154] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:11.652 [2024-07-15 21:33:02.314284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:11:11.652 passed 00:11:11.652 Test: admin_get_features_mandatory_features ...[2024-07-15 21:33:02.409289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:11.652 [2024-07-15 21:33:02.412308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:11.908 passed 00:11:11.908 Test: admin_get_features_optional_features ...[2024-07-15 21:33:02.501835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:11.908 [2024-07-15 21:33:02.504858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:11.908 passed 00:11:11.908 Test: admin_set_features_number_of_queues ...[2024-07-15 21:33:02.593045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:11.908 [2024-07-15 21:33:02.699283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:12.164 passed 00:11:12.164 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 21:33:02.786416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:12.164 [2024-07-15 21:33:02.789436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:12.164 passed 00:11:12.164 Test: admin_get_log_page_with_lpo ...[2024-07-15 21:33:02.877542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:12.164 [2024-07-15 21:33:02.944164] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:12.164 [2024-07-15 21:33:02.957234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:12.420 passed 00:11:12.420 Test: fabric_property_get ...[2024-07-15 21:33:03.045494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:12.420 [2024-07-15 21:33:03.046776] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:11:12.420 [2024-07-15 21:33:03.048511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:12.420 passed 00:11:12.420 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 21:33:03.138073] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:12.420 [2024-07-15 21:33:03.139349] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:12.420 [2024-07-15 21:33:03.141098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:12.420 passed 00:11:12.679 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 21:33:03.229600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:12.679 [2024-07-15 21:33:03.317149] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:12.679 [2024-07-15 21:33:03.333145] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:12.679 [2024-07-15 21:33:03.338256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:12.679 passed 00:11:12.679 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 21:33:03.426401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:12.679 [2024-07-15 21:33:03.427690] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:12.679 [2024-07-15 21:33:03.429425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:12.679 passed 00:11:12.935 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 21:33:03.517569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:12.935 [2024-07-15 21:33:03.593146] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:12.935 [2024-07-15 21:33:03.617149] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:12.935 [2024-07-15 21:33:03.622253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:12.935 passed 00:11:12.935 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 21:33:03.712827] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:12.935 [2024-07-15 21:33:03.714143] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:12.935 [2024-07-15 21:33:03.714181] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:12.935 [2024-07-15 21:33:03.715857] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:13.191 passed 00:11:13.191 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 21:33:03.804561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:13.191 [2024-07-15 21:33:03.896146] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:13.191 [2024-07-15 21:33:03.904147] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:13.191 [2024-07-15 21:33:03.912150] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:13.191 [2024-07-15 21:33:03.920161] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:13.191 [2024-07-15 21:33:03.949280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:13.191 passed 00:11:13.447 Test: admin_create_io_sq_verify_pc ...[2024-07-15 21:33:04.038452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:13.447 [2024-07-15 21:33:04.054167] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:13.447 [2024-07-15 21:33:04.071756] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:11:13.447 passed 00:11:13.447 Test: admin_create_io_qp_max_qps ...[2024-07-15 21:33:04.162333] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:14.818 [2024-07-15 21:33:05.265153] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:15.075 [2024-07-15 21:33:05.636542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:15.075 passed 00:11:15.075 Test: admin_create_io_sq_shared_cq ...[2024-07-15 21:33:05.734915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:15.076 [2024-07-15 21:33:05.868163] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:15.333 [2024-07-15 21:33:05.905246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:15.333 passed 00:11:15.333 00:11:15.333 Run Summary: Type Total Ran Passed Failed Inactive 00:11:15.333 suites 1 1 n/a 0 0 00:11:15.333 tests 18 18 18 0 0 00:11:15.333 asserts 360 360 360 0 n/a 00:11:15.333 00:11:15.333 Elapsed time = 1.617 seconds 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 314192 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 314192 ']' 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 314192 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 314192 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 314192' 00:11:15.333 killing process with pid 314192 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 314192 00:11:15.333 21:33:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 314192 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:15.592 00:11:15.592 real 0m5.770s 00:11:15.592 user 0m16.317s 00:11:15.592 sys 0m0.493s 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:15.592 ************************************ 00:11:15.592 END TEST nvmf_vfio_user_nvme_compliance 00:11:15.592 ************************************ 00:11:15.592 21:33:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:15.592 21:33:06 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:15.592 21:33:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:15.592 21:33:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.592 21:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:15.592 ************************************ 00:11:15.592 START TEST nvmf_vfio_user_fuzz 00:11:15.592 ************************************ 00:11:15.592 21:33:06 
nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:15.592 * Looking for test storage... 00:11:15.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.592 
21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=314915 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 314915' 00:11:15.592 Process pid: 314915 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 314915 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 314915 ']' 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.592 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:15.850 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.850 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:11:15.850 21:33:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:17.221 malloc0 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.221 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:17.222 21:33:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:49.279 Fuzzing completed. 
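The target setup traced above boils down to a fixed sequence of RPCs: create the VFIOUSER transport, create a 64 MiB/512 B malloc bdev, create the subsystem, attach the namespace, and add the vfio-user listener. A minimal sketch of that sequence, reconstructed from the trace (the helper name is ours; the argument lists mirror the `rpc_cmd` calls above and would normally be passed to SPDK's `scripts/rpc.py`):

```python
# Sketch of the RPC sequence performed by vfio_user_fuzz.sh's target setup,
# reconstructed from the xtrace output above. Nothing is executed here; the
# function just returns the rpc.py argument lists in order.
NQN = "nqn.2021-09.io.spdk:cnode0"
TRADDR = "/var/run/vfio-user"

def vfio_user_target_setup(nqn=NQN, traddr=TRADDR):
    """Return the rpc.py argument lists, in order, for the fuzz target setup."""
    return [
        ["nvmf_create_transport", "-t", "VFIOUSER"],
        ["bdev_malloc_create", "64", "512", "-b", "malloc0"],
        ["nvmf_create_subsystem", nqn, "-a", "-s", "spdk"],
        ["nvmf_subsystem_add_ns", nqn, "malloc0"],
        ["nvmf_subsystem_add_listener", nqn, "-t", "VFIOUSER",
         "-a", traddr, "-s", "0"],
    ]

# Each entry would be invoked as: scripts/rpc.py <args...>
cmds = vfio_user_target_setup()
```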
Shutting down the fuzz application 00:11:49.279 00:11:49.279 Dumping successful admin opcodes: 00:11:49.279 8, 9, 10, 24, 00:11:49.279 Dumping successful io opcodes: 00:11:49.279 0, 00:11:49.279 NS: 0x200003a1ef00 I/O qp, Total commands completed: 654551, total successful commands: 2543, random_seed: 4097696256 00:11:49.279 NS: 0x200003a1ef00 admin qp, Total commands completed: 154916, total successful commands: 1252, random_seed: 2230121920 00:11:49.279 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:49.279 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 314915 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 314915 ']' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 314915 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 314915 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 314915' 00:11:49.280 killing process with pid 314915 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@967 -- # kill 314915 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 314915 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:49.280 00:11:49.280 real 0m32.168s 00:11:49.280 user 0m31.714s 00:11:49.280 sys 0m27.568s 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.280 21:33:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:49.280 ************************************ 00:11:49.280 END TEST nvmf_vfio_user_fuzz 00:11:49.280 ************************************ 00:11:49.280 21:33:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:49.280 21:33:38 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:49.280 21:33:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:49.280 21:33:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.280 21:33:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:49.280 ************************************ 00:11:49.280 START TEST nvmf_host_management 00:11:49.280 ************************************ 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:49.280 * Looking for test storage... 
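The fuzz run above ends with per-queue summary lines (`NS: ... qp, Total commands completed: ..., total successful commands: ..., random_seed: ...`). When post-processing these logs, the counters can be pulled out with a short parser like this; the format is taken directly from this run's output, and the field labels are our own choice:

```python
import re

# Matches nvme_fuzz's per-queue summary line as printed in the log above.
SUMMARY_RE = re.compile(
    r"NS: (?P<ns>0x[0-9a-f]+) (?P<qp>.+?) qp, "
    r"Total commands completed: (?P<completed>\d+), "
    r"total successful commands: (?P<successful>\d+), "
    r"random_seed: (?P<seed>\d+)"
)

def parse_fuzz_summary(line):
    """Return a dict of counters from one summary line, or None if no match."""
    m = SUMMARY_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    return {
        "ns": d["ns"],
        "queue": d["qp"],                 # e.g. "I/O" or "admin"
        "completed": int(d["completed"]),
        "successful": int(d["successful"]),
        "seed": int(d["seed"]),
    }

# Sample copied verbatim from the run above.
sample = ("NS: 0x200003a1ef00 I/O qp, Total commands completed: 654551, "
          "total successful commands: 2543, random_seed: 4097696256")
stats = parse_fuzz_summary(sample)
```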
00:11:49.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.280 
21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.280 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.281 21:33:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.281 21:33:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.538 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.538 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:49.538 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:11:49.538 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:49.538 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:49.538 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:49.538 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:49.539 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:49.539 
21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:49.539 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:49.539 Found net devices under 0000:08:00.0: cvl_0_0 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:49.539 Found net devices under 0000:08:00.1: cvl_0_1 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:49.539 21:33:40 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:49.539 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:49.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
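After moving one port into the `cvl_0_0_ns_spdk` namespace and assigning 10.0.0.1/10.0.0.2, the harness verifies connectivity with a single `ping` in each direction. A sketch for machine-checking that statistics block (iputils `ping` format, as printed by this run; the helper name is ours):

```python
import re

def parse_ping_stats(text):
    """Parse iputils ping's statistics block; return counters or None."""
    loss = re.search(
        r"(\d+) packets transmitted, (\d+) received, "
        r"(\d+(?:\.\d+)?)% packet loss", text)
    rtt = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", text)
    if not loss:
        return None
    out = {
        "transmitted": int(loss.group(1)),
        "received": int(loss.group(2)),
        "loss_pct": float(loss.group(3)),
    }
    if rtt:
        out["rtt_avg_ms"] = float(rtt.group(2))
    return out

# Statistics copied verbatim from the 10.0.0.2 ping in this run.
sample = (
    "1 packets transmitted, 1 received, 0% packet loss, time 0ms\n"
    "rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms"
)
stats = parse_ping_stats(sample)
```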
00:11:49.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:11:49.797 00:11:49.797 --- 10.0.0.2 ping statistics --- 00:11:49.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.797 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:11:49.797 00:11:49.797 --- 10.0.0.1 ping statistics --- 00:11:49.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.797 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:49.797 21:33:40 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=319607 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 319607 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 319607 ']' 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:49.797 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:49.797 [2024-07-15 21:33:40.459717] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:11:49.797 [2024-07-15 21:33:40.459820] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:49.797 EAL: No free 2048 kB hugepages reported on node 1
00:11:49.797 [2024-07-15 21:33:40.527909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:50.055 [2024-07-15 21:33:40.646131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:50.055 [2024-07-15 21:33:40.646196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:50.055 [2024-07-15 21:33:40.646212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:50.055 [2024-07-15 21:33:40.646225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:50.055 [2024-07-15 21:33:40.646237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:50.055 [2024-07-15 21:33:40.646323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:50.055 [2024-07-15 21:33:40.646377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:50.055 [2024-07-15 21:33:40.646473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:50.055 [2024-07-15 21:33:40.646466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:50.055 [2024-07-15 21:33:40.785839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:50.055 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:50.055 Malloc0
00:11:50.055 [2024-07-15 21:33:40.843528] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=319734
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 319734 /var/tmp/bdevperf.sock
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 319734 ']'
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:11:50.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:11:50.314 {
00:11:50.314 "params": {
00:11:50.314 "name": "Nvme$subsystem",
00:11:50.314 "trtype": "$TEST_TRANSPORT",
00:11:50.314 "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:50.314 "adrfam": "ipv4",
00:11:50.314 "trsvcid": "$NVMF_PORT",
00:11:50.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:50.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:50.314 "hdgst": ${hdgst:-false},
00:11:50.314 "ddgst": ${ddgst:-false}
00:11:50.314 },
00:11:50.314 "method": "bdev_nvme_attach_controller"
00:11:50.314 }
00:11:50.314 EOF
00:11:50.314 )")
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:11:50.314 21:33:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:11:50.314 "params": {
00:11:50.314 "name": "Nvme0",
00:11:50.314 "trtype": "tcp",
00:11:50.314 "traddr": "10.0.0.2",
00:11:50.314 "adrfam": "ipv4",
00:11:50.314 "trsvcid": "4420",
00:11:50.314 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:50.314 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:11:50.314 "hdgst": false,
00:11:50.314 "ddgst": false
00:11:50.314 },
00:11:50.314 "method": "bdev_nvme_attach_controller"
00:11:50.314 }'
00:11:50.314 [2024-07-15 21:33:40.927673] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:11:50.314 [2024-07-15 21:33:40.927770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319734 ]
00:11:50.314 EAL: No free 2048 kB hugepages reported on node 1
00:11:50.314 [2024-07-15 21:33:40.988158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:50.314 [2024-07-15 21:33:41.087698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:50.880 Running I/O for 10 seconds...
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
00:11:50.880 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=549
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 549 -ge 100 ']'
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:51.139 21:33:41
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:51.139 [2024-07-15 21:33:41.765911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.139 [2024-07-15 21:33:41.766019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.139 [2024-07-15 21:33:41.766050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.139 [2024-07-15 21:33:41.766076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.139 [2024-07-15 21:33:41.766102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c238d0 is same with the state(5) to be set 00:11:51.139 [2024-07-15 21:33:41.766545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.766963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.766984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.139 [2024-07-15 21:33:41.767033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.767076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.767123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.767176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.767231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:51.139 [2024-07-15 21:33:41.767280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.767332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.139 [2024-07-15 21:33:41.767382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.139 [2024-07-15 21:33:41.767409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.140 [2024-07-15 21:33:41.767459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 21:33:41 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:11:51.140 [2024-07-15 21:33:41.767598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:11:51.140 [2024-07-15 21:33:41.767854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.767969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.767984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.768000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.768017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.768033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.768048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.768064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.768079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.768095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.768113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.768129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.768152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.768169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.768191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.140 [2024-07-15 21:33:41.768206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:51.140 [2024-07-15 21:33:41.768221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.140 [2024-07-15 21:33:41.768864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.140 [2024-07-15 21:33:41.768879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.141 [2024-07-15 21:33:41.768895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.141 [2024-07-15 21:33:41.768910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.141 [2024-07-15 21:33:41.768926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.141 [2024-07-15 21:33:41.768941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.141 [2024-07-15 21:33:41.768958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.141 [2024-07-15 21:33:41.768972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.141 [2024-07-15 21:33:41.768988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:51.141 [2024-07-15 21:33:41.769003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:51.141 [2024-07-15 21:33:41.769072] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2034840 was disconnected and freed. reset controller.
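Every completion in the burst above carries status (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. That is the expected status for writes still outstanding on a queue pair being torn down for the controller reset that follows. A minimal sketch of decoding that pair (helper name is illustrative, not part of SPDK; only the generic-status codes relevant here are mapped):

```shell
# Hypothetical helper: map the "(SCT/SC)" pair printed by
# spdk_nvme_print_completion back to the text SPDK shows for it.
# Only SCT 00 (generic) codes seen in this log are covered.
decode_status() {
    case "$1/$2" in
        00/00) echo "SUCCESS" ;;
        00/08) echo "ABORTED - SQ DELETION" ;;
        *)     echo "UNKNOWN ($1/$2)" ;;
    esac
}
decode_status 00 08   # -> ABORTED - SQ DELETION
```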
00:11:51.141 [2024-07-15 21:33:41.770205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:11:51.141 task offset: 81920 on job bdev=Nvme0n1 fails
00:11:51.141
00:11:51.141 Latency(us)
00:11:51.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:51.141 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:51.141 Job: Nvme0n1 ended in about 0.40 seconds with error
00:11:51.141 Verification LBA range: start 0x0 length 0x400
00:11:51.141 Nvme0n1 : 0.40 1598.20 99.89 159.82 0.00 35273.54 3519.53 33399.09
00:11:51.141 ===================================================================================================================
00:11:51.141 Total : 1598.20 99.89 159.82 0.00 35273.54 3519.53 33399.09
00:11:51.141 [2024-07-15 21:33:41.772132] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:51.141 [2024-07-15 21:33:41.772168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c238d0 (9): Bad file descriptor
00:11:51.141 21:33:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:51.141 21:33:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-15 21:33:41.819269] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:11:52.072 21:33:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 319734
00:11:52.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (319734) - No such process
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:11:52.073 {
00:11:52.073 "params": {
00:11:52.073 "name": "Nvme$subsystem",
00:11:52.073 "trtype": "$TEST_TRANSPORT",
00:11:52.073 "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:52.073 "adrfam": "ipv4",
00:11:52.073 "trsvcid": "$NVMF_PORT",
00:11:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:52.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:52.073 "hdgst": ${hdgst:-false},
00:11:52.073 "ddgst": ${ddgst:-false}
00:11:52.073 },
00:11:52.073 "method": "bdev_nvme_attach_controller"
00:11:52.073 }
00:11:52.073 EOF
00:11:52.073 )")
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
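The `kill -9 319734` above fails with "No such process" and the script immediately runs `true`: the bdevperf job had already exited, and the cleanup path deliberately absorbs the error so a `set -e` script is not aborted by a stale pid. The same guard, sketched with a throwaway child process:

```shell
# Sketch of the tolerant-kill pattern used by the cleanup above:
# kill a pid that may already be gone, letting `|| true` absorb the failure
# so an errexit (`set -e`) script keeps going.
set -e
sleep 5 &
pid=$!
kill -9 "$pid" 2>/dev/null || true   # first kill: the child still exists
wait "$pid" 2>/dev/null || true      # reap it; the 137 exit status is absorbed
kill -9 "$pid" 2>/dev/null || true   # stale pid now, yet the script survives
echo "cleanup done"
```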
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:11:52.073 21:33:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:11:52.073 "params": {
00:11:52.073 "name": "Nvme0",
00:11:52.073 "trtype": "tcp",
00:11:52.073 "traddr": "10.0.0.2",
00:11:52.073 "adrfam": "ipv4",
00:11:52.073 "trsvcid": "4420",
00:11:52.073 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:52.073 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:11:52.073 "hdgst": false,
00:11:52.073 "ddgst": false
00:11:52.073 },
00:11:52.073 "method": "bdev_nvme_attach_controller"
00:11:52.073 }'
00:11:52.073 [2024-07-15 21:33:42.827822] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:11:52.073 [2024-07-15 21:33:42.827919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319867 ]
00:11:52.330 EAL: No free 2048 kB hugepages reported on node 1
00:11:52.330 [2024-07-15 21:33:42.885419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:52.586 [2024-07-15 21:33:42.989397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:52.586 Running I/O for 1 seconds...
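The `gen_nvmf_target_json` trace above builds one attach-controller object per subsystem from a heredoc template, then `jq` joins and pretty-prints the result as the config bdevperf reads via `--json /dev/fd/62`. A reduced sketch of the same shape (static illustrative values; the real helper substitutes `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, and so on from the live target):

```shell
# Emit a one-subsystem bdevperf attach config the way the traced helper
# does: a heredoc template parameterized by subsystem index. Addresses and
# NQNs below are illustrative constants, not read from a running target.
gen_config() {
    local subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
gen_config 0 | grep '"name"'
```

The process-substitution trick (`--json /dev/fd/62`) lets the config reach bdevperf without ever touching a temp file on disk.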
00:11:53.513
00:11:53.513 Latency(us)
00:11:53.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:53.513 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:53.513 Verification LBA range: start 0x0 length 0x400
00:11:53.513 Nvme0n1 : 1.01 1639.89 102.49 0.00 0.00 38352.93 7136.14 32234.00
00:11:53.513 ===================================================================================================================
00:11:53.513 Total : 1639.89 102.49 0.00 0.00 38352.93 7136.14 32234.00
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:53.770 rmmod nvme_tcp
00:11:53.770 rmmod nvme_fabrics
00:11:53.770 rmmod nvme_keyring
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 319607 ']'
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 319607
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 319607 ']'
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 319607
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 319607
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 319607'
00:11:53.770 killing process with pid 319607
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 319607
00:11:53.770 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 319607
00:11:54.028 [2024-07-15 21:33:44.628844] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:11:54.028 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:54.028 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:54.028 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:54.028 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:54.028 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:54.028 21:33:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:54.028 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:54.028 21:33:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:55.931 21:33:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:55.931 21:33:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:11:55.931
00:11:55.931 real 0m8.241s
00:11:55.931 user 0m18.905s
00:11:55.931 sys 0m2.332s
00:11:55.931 21:33:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:55.931 21:33:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:55.931 ************************************
00:11:55.931 END TEST nvmf_host_management
00:11:55.931 ************************************
00:11:56.190 21:33:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:11:56.190 21:33:46 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:11:56.190 21:33:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:56.190 21:33:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:56.190 21:33:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:56.190 ************************************
00:11:56.190 START TEST nvmf_lvol
00:11:56.190 ************************************
00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:11:56.190 * Looking for test storage...
00:11:56.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.190 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:56.191 21:33:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:58.098 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:58.098 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:58.098 Found net devices under 0000:08:00.0: cvl_0_0 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:58.098 Found net devices under 0000:08:00.1: cvl_0_1 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
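The xtrace that follows moves the target-side NIC (cvl_0_0) into the freshly created cvl_0_0_ns_spdk namespace and addresses both ends (10.0.0.1 on the initiator side, 10.0.0.2 inside the namespace) so the TCP transport runs over real hardware while staying isolated. The same sequence as a dry run (commands are echoed rather than executed, since applying them needs root; interface and namespace names match this rig):

```shell
# Dry-run sketch of the namespace plumbing performed in the trace below.
# Drop the `run` wrapper (and run as root) to apply it for real.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, moved into the namespace (gets 10.0.0.2)
INI_IF=cvl_0_1   # initiator side, stays in the root namespace (gets 10.0.0.1)
run() { echo "$@"; }
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
```

The follow-up pings in both directions confirm the veth-less, two-NIC loop is actually routable before the nvmf target is started inside the namespace.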
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:58.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:58.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms
00:11:58.098
00:11:58.098 --- 10.0.0.2 ping statistics ---
00:11:58.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:58.098 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:58.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:58.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:11:58.098
00:11:58.098 --- 10.0.0.1 ping statistics ---
00:11:58.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:58.098 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:58.098 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=321559
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 321559
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 321559 ']'
00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 --
# local rpc_addr=/var/tmp/spdk.sock 00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:58.099 21:33:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.099 [2024-07-15 21:33:48.718250] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:11:58.099 [2024-07-15 21:33:48.718351] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.099 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.099 [2024-07-15 21:33:48.798182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.357 [2024-07-15 21:33:48.952443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.357 [2024-07-15 21:33:48.952514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.357 [2024-07-15 21:33:48.952546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.357 [2024-07-15 21:33:48.952571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.357 [2024-07-15 21:33:48.952593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
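The `waitforlisten` calls above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...", `max_retries=100`) poll until the target's RPC socket exists. A minimal runnable sketch of that pattern; for a self-contained demo it waits on a plain file (`wait_for_path` and the temp path are illustrative, not part of the test suite):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the app's RPC endpoint
# appears, giving up after max_retries attempts.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

demo=$(mktemp -u)              # stand-in for /var/tmp/spdk.sock
(sleep 0.3; touch "$demo") &   # simulate the target starting up
wait_for_path "$demo" 100 && echo "listening: $demo"
rm -f "$demo"; wait
```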
00:11:58.357 [2024-07-15 21:33:48.952685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.357 [2024-07-15 21:33:48.952755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.357 [2024-07-15 21:33:48.952745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.357 21:33:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.357 21:33:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:58.357 21:33:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.357 21:33:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.357 21:33:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:58.357 21:33:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.357 21:33:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:58.616 [2024-07-15 21:33:49.377167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.616 21:33:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.182 21:33:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:59.182 21:33:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.440 21:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:59.440 21:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:59.698 21:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:59.956 21:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0b4d88a9-c064-4cf9-9151-b7d707c13232 00:11:59.956 21:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0b4d88a9-c064-4cf9-9151-b7d707c13232 lvol 20 00:12:00.213 21:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ed68f1b8-083a-464c-b1bc-1be56952ca95 00:12:00.213 21:33:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:00.471 21:33:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ed68f1b8-083a-464c-b1bc-1be56952ca95 00:12:01.037 21:33:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:01.037 [2024-07-15 21:33:51.797426] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.037 21:33:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:01.601 21:33:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=321893 00:12:01.601 21:33:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:01.601 21:33:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:01.601 EAL: No free 2048 kB hugepages reported on node 1 
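The provisioning chain the trace walks through (two malloc bdevs, a raid0 on top, an lvstore, a 20 MiB lvol, then subsystem/namespace/listener) can be summarized as a dry-run; here `rpc()` only echoes, where the real test invokes `scripts/rpc.py` against the running target, and the `<lvs-uuid>`/`<lvol-uuid>` placeholders stand in for the UUIDs the RPCs return:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_lvol.sh provisioning sequence above.
rpc() { echo "rpc.py $*"; }

rpc bdev_malloc_create 64 512                      # -> Malloc0
rpc bdev_malloc_create 64 512                      # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
rpc bdev_lvol_create_lvstore raid0 lvs             # -> <lvs-uuid>
rpc bdev_lvol_create -u "<lvs-uuid>" lvol 20       # -> <lvol-uuid>
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "<lvol-uuid>"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```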
00:12:02.534 21:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ed68f1b8-083a-464c-b1bc-1be56952ca95 MY_SNAPSHOT 00:12:02.791 21:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bab06b32-4e5b-44a6-b8af-0dc39ce60785 00:12:02.791 21:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ed68f1b8-083a-464c-b1bc-1be56952ca95 30 00:12:03.048 21:33:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bab06b32-4e5b-44a6-b8af-0dc39ce60785 MY_CLONE 00:12:03.613 21:33:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=942b88e1-fc85-4004-9bef-fcec0c18d879 00:12:03.613 21:33:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 942b88e1-fc85-4004-9bef-fcec0c18d879 00:12:04.178 21:33:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 321893 00:12:12.282 Initializing NVMe Controllers 00:12:12.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:12.282 Controller IO queue size 128, less than required. 00:12:12.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:12.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:12.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:12.282 Initialization complete. Launching workers. 
00:12:12.282 ======================================================== 00:12:12.282 Latency(us) 00:12:12.282 Device Information : IOPS MiB/s Average min max 00:12:12.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11347.49 44.33 11285.03 1695.39 73009.96 00:12:12.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11564.49 45.17 11069.65 2146.16 72817.77 00:12:12.282 ======================================================== 00:12:12.282 Total : 22911.99 89.50 11176.32 1695.39 73009.96 00:12:12.282 00:12:12.282 21:34:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:12.282 21:34:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ed68f1b8-083a-464c-b1bc-1be56952ca95 00:12:12.282 21:34:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b4d88a9-c064-4cf9-9151-b7d707c13232 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:12.849 rmmod nvme_tcp 00:12:12.849 rmmod nvme_fabrics 00:12:12.849 rmmod nvme_keyring 00:12:12.849 
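The teardown that follows uses the suite's `killprocess` helper, which confirms the pid is still alive and checks its command name (refusing to kill a `sudo` wrapper) before sending SIGKILL. A runnable sketch of that guard, demoed against a throwaway `sleep` in place of the `nvmf_tgt` pid:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: verify liveness and command name
# before kill -9, so a recycled pid or a sudo wrapper is never killed.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1          # still running?
    name=$(ps --no-headers -o comm= -p "$pid")
    [ "$name" = sudo ] && return 1                  # never kill the wrapper
    echo "killing process with pid $pid ($name)"
    kill -9 "$pid"
    wait "$pid" 2>/dev/null || true
}

sleep 30 &            # stand-in for the target process
killprocess "$!"
```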
21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 321559 ']' 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 321559 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 321559 ']' 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 321559 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 321559 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 321559' 00:12:12.849 killing process with pid 321559 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 321559 00:12:12.849 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 321559 00:12:13.107 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:13.107 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:13.107 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:13.107 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:13.107 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:13.107 21:34:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:13.107 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:13.107 21:34:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.015 21:34:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:15.015 00:12:15.015 real 0m18.942s 00:12:15.015 user 1m6.049s 00:12:15.015 sys 0m5.453s 00:12:15.015 21:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.015 21:34:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:15.015 ************************************ 00:12:15.015 END TEST nvmf_lvol 00:12:15.015 ************************************ 00:12:15.015 21:34:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:15.015 21:34:05 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:15.015 21:34:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:15.015 21:34:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.015 21:34:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.015 ************************************ 00:12:15.015 START TEST nvmf_lvs_grow 00:12:15.015 ************************************ 00:12:15.015 21:34:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:15.273 * Looking for test storage... 
00:12:15.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.273 21:34:05 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.273 21:34:05 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:15.273 21:34:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:17.175 21:34:07 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.175 21:34:07 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:17.175 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:17.175 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.175 21:34:07 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:17.175 Found net devices under 0000:08:00.0: cvl_0_0 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:17.175 Found net devices under 0000:08:00.1: cvl_0_1 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
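The `nvmf_tcp_init` sequence running here moves the target interface into its own network namespace so initiator (10.0.0.1) and target (10.0.0.2) traffic crosses a real TCP path, then opens port 4420. The real commands need root and live NICs, so this sketch assembles them dry-run style (`run()` echoes instead of executing):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing in nvmf/common.sh above.
run() { echo "+ $*"; }

tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk
run ip -4 addr flush "$tgt"
run ip -4 addr flush "$ini"
run ip netns add "$ns"
run ip link set "$tgt" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$ini"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
run ip link set "$ini" up
run ip netns exec "$ns" ip link set "$tgt" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
```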
00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:17.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:12:17.175 00:12:17.175 --- 10.0.0.2 ping statistics --- 00:12:17.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.175 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:12:17.175 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:17.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:12:17.175 00:12:17.175 --- 10.0.0.1 ping statistics --- 00:12:17.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.175 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=324401 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 324401 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 324401 ']' 
00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:17.176 [2024-07-15 21:34:07.671796] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:12:17.176 [2024-07-15 21:34:07.671900] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.176 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.176 [2024-07-15 21:34:07.737463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.176 [2024-07-15 21:34:07.856028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.176 [2024-07-15 21:34:07.856085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.176 [2024-07-15 21:34:07.856101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.176 [2024-07-15 21:34:07.856115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.176 [2024-07-15 21:34:07.856127] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:17.176 [2024-07-15 21:34:07.856163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.176 21:34:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:17.433 21:34:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.433 21:34:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:17.691 [2024-07-15 21:34:08.268764] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:17.691 ************************************ 00:12:17.691 START TEST lvs_grow_clean 00:12:17.691 ************************************ 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:17.691 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:17.948 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:17.948 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:18.204 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=87fa608a-bc26-46b6-81ad-441f16262d72 00:12:18.204 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:18.204 21:34:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:18.479 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:18.479 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:18.479 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 87fa608a-bc26-46b6-81ad-441f16262d72 lvol 150 00:12:18.736 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f4346f0-0a9d-4c18-a095-b7f7f75e8f32 00:12:18.736 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:18.736 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:18.993 [2024-07-15 21:34:09.661485] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:18.993 [2024-07-15 21:34:09.661570] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:18.993 true 00:12:18.993 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:18.993 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:19.250 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:19.251 21:34:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:12:19.507 21:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f4346f0-0a9d-4c18-a095-b7f7f75e8f32 00:12:19.764 21:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:20.022 [2024-07-15 21:34:10.648439] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.022 21:34:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=324743 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 324743 /var/tmp/bdevperf.sock 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 324743 ']' 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:20.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:20.279 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:20.279 [2024-07-15 21:34:11.050830] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:12:20.279 [2024-07-15 21:34:11.050941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324743 ] 00:12:20.536 EAL: No free 2048 kB hugepages reported on node 1 00:12:20.536 [2024-07-15 21:34:11.107052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.536 [2024-07-15 21:34:11.204151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.536 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.536 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:12:20.536 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:21.130 Nvme0n1 00:12:21.130 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:21.389 [ 00:12:21.389 { 00:12:21.389 "name": "Nvme0n1", 00:12:21.389 "aliases": [ 00:12:21.389 "5f4346f0-0a9d-4c18-a095-b7f7f75e8f32" 
00:12:21.389 ], 00:12:21.389 "product_name": "NVMe disk", 00:12:21.389 "block_size": 4096, 00:12:21.389 "num_blocks": 38912, 00:12:21.389 "uuid": "5f4346f0-0a9d-4c18-a095-b7f7f75e8f32", 00:12:21.389 "assigned_rate_limits": { 00:12:21.389 "rw_ios_per_sec": 0, 00:12:21.389 "rw_mbytes_per_sec": 0, 00:12:21.389 "r_mbytes_per_sec": 0, 00:12:21.389 "w_mbytes_per_sec": 0 00:12:21.389 }, 00:12:21.389 "claimed": false, 00:12:21.389 "zoned": false, 00:12:21.389 "supported_io_types": { 00:12:21.389 "read": true, 00:12:21.389 "write": true, 00:12:21.389 "unmap": true, 00:12:21.389 "flush": true, 00:12:21.389 "reset": true, 00:12:21.389 "nvme_admin": true, 00:12:21.389 "nvme_io": true, 00:12:21.389 "nvme_io_md": false, 00:12:21.389 "write_zeroes": true, 00:12:21.389 "zcopy": false, 00:12:21.389 "get_zone_info": false, 00:12:21.389 "zone_management": false, 00:12:21.389 "zone_append": false, 00:12:21.389 "compare": true, 00:12:21.389 "compare_and_write": true, 00:12:21.389 "abort": true, 00:12:21.389 "seek_hole": false, 00:12:21.389 "seek_data": false, 00:12:21.389 "copy": true, 00:12:21.389 "nvme_iov_md": false 00:12:21.389 }, 00:12:21.389 "memory_domains": [ 00:12:21.389 { 00:12:21.389 "dma_device_id": "system", 00:12:21.389 "dma_device_type": 1 00:12:21.389 } 00:12:21.389 ], 00:12:21.389 "driver_specific": { 00:12:21.389 "nvme": [ 00:12:21.389 { 00:12:21.389 "trid": { 00:12:21.389 "trtype": "TCP", 00:12:21.389 "adrfam": "IPv4", 00:12:21.389 "traddr": "10.0.0.2", 00:12:21.389 "trsvcid": "4420", 00:12:21.389 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:21.389 }, 00:12:21.389 "ctrlr_data": { 00:12:21.389 "cntlid": 1, 00:12:21.389 "vendor_id": "0x8086", 00:12:21.389 "model_number": "SPDK bdev Controller", 00:12:21.389 "serial_number": "SPDK0", 00:12:21.389 "firmware_revision": "24.09", 00:12:21.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:21.389 "oacs": { 00:12:21.389 "security": 0, 00:12:21.389 "format": 0, 00:12:21.389 "firmware": 0, 00:12:21.389 "ns_manage": 0 
00:12:21.389 }, 00:12:21.389 "multi_ctrlr": true, 00:12:21.389 "ana_reporting": false 00:12:21.389 }, 00:12:21.389 "vs": { 00:12:21.389 "nvme_version": "1.3" 00:12:21.389 }, 00:12:21.389 "ns_data": { 00:12:21.389 "id": 1, 00:12:21.389 "can_share": true 00:12:21.389 } 00:12:21.389 } 00:12:21.389 ], 00:12:21.389 "mp_policy": "active_passive" 00:12:21.389 } 00:12:21.389 } 00:12:21.389 ] 00:12:21.389 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=324847 00:12:21.389 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:21.389 21:34:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:21.389 Running I/O for 10 seconds... 00:12:22.323 Latency(us) 00:12:22.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.323 Nvme0n1 : 1.00 16579.00 64.76 0.00 0.00 0.00 0.00 0.00 00:12:22.323 =================================================================================================================== 00:12:22.323 Total : 16579.00 64.76 0.00 0.00 0.00 0.00 0.00 00:12:22.323 00:12:23.255 21:34:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:23.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.513 Nvme0n1 : 2.00 16735.00 65.37 0.00 0.00 0.00 0.00 0.00 00:12:23.513 =================================================================================================================== 00:12:23.513 Total : 16735.00 65.37 0.00 0.00 0.00 0.00 0.00 00:12:23.513 00:12:23.513 true 00:12:23.771 21:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean 
-- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:23.771 21:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:24.029 21:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:24.029 21:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:24.029 21:34:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 324847 00:12:24.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.595 Nvme0n1 : 3.00 16830.00 65.74 0.00 0.00 0.00 0.00 0.00 00:12:24.595 =================================================================================================================== 00:12:24.595 Total : 16830.00 65.74 0.00 0.00 0.00 0.00 0.00 00:12:24.595 00:12:25.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.527 Nvme0n1 : 4.00 16909.50 66.05 0.00 0.00 0.00 0.00 0.00 00:12:25.527 =================================================================================================================== 00:12:25.527 Total : 16909.50 66.05 0.00 0.00 0.00 0.00 0.00 00:12:25.527 00:12:26.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.472 Nvme0n1 : 5.00 16982.60 66.34 0.00 0.00 0.00 0.00 0.00 00:12:26.472 =================================================================================================================== 00:12:26.472 Total : 16982.60 66.34 0.00 0.00 0.00 0.00 0.00 00:12:26.472 00:12:27.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:27.450 Nvme0n1 : 6.00 16999.33 66.40 0.00 0.00 0.00 0.00 0.00 00:12:27.450 
=================================================================================================================== 00:12:27.450 Total : 16999.33 66.40 0.00 0.00 0.00 0.00 0.00 00:12:27.450 00:12:28.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.383 Nvme0n1 : 7.00 17038.29 66.56 0.00 0.00 0.00 0.00 0.00 00:12:28.383 =================================================================================================================== 00:12:28.383 Total : 17038.29 66.56 0.00 0.00 0.00 0.00 0.00 00:12:28.383 00:12:29.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.754 Nvme0n1 : 8.00 17067.50 66.67 0.00 0.00 0.00 0.00 0.00 00:12:29.754 =================================================================================================================== 00:12:29.754 Total : 17067.50 66.67 0.00 0.00 0.00 0.00 0.00 00:12:29.754 00:12:30.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:30.687 Nvme0n1 : 9.00 17090.22 66.76 0.00 0.00 0.00 0.00 0.00 00:12:30.687 =================================================================================================================== 00:12:30.687 Total : 17090.22 66.76 0.00 0.00 0.00 0.00 0.00 00:12:30.687 00:12:31.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.621 Nvme0n1 : 10.00 17114.90 66.86 0.00 0.00 0.00 0.00 0.00 00:12:31.621 =================================================================================================================== 00:12:31.621 Total : 17114.90 66.86 0.00 0.00 0.00 0.00 0.00 00:12:31.621 00:12:31.621 00:12:31.621 Latency(us) 00:12:31.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.621 Nvme0n1 : 10.00 17121.00 66.88 0.00 0.00 7471.69 3835.07 14757.74 00:12:31.621 
=================================================================================================================== 00:12:31.621 Total : 17121.00 66.88 0.00 0.00 7471.69 3835.07 14757.74 00:12:31.621 0 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 324743 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 324743 ']' 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 324743 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 324743 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 324743' 00:12:31.621 killing process with pid 324743 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 324743 00:12:31.621 Received shutdown signal, test time was about 10.000000 seconds 00:12:31.621 00:12:31.621 Latency(us) 00:12:31.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.621 =================================================================================================================== 00:12:31.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 324743 00:12:31.621 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.879 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:32.445 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:32.445 21:34:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:32.703 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:32.703 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:32.703 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:32.959 [2024-07-15 21:34:23.528091] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:32.959 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:33.216 request: 00:12:33.216 { 00:12:33.216 "uuid": "87fa608a-bc26-46b6-81ad-441f16262d72", 00:12:33.216 "method": "bdev_lvol_get_lvstores", 00:12:33.216 "req_id": 1 00:12:33.216 } 00:12:33.216 Got JSON-RPC error response 00:12:33.216 response: 00:12:33.216 { 00:12:33.216 "code": -19, 00:12:33.216 "message": "No such device" 00:12:33.216 } 00:12:33.216 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:33.216 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:33.216 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:33.216 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:33.216 21:34:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:33.473 aio_bdev 00:12:33.473 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5f4346f0-0a9d-4c18-a095-b7f7f75e8f32 00:12:33.473 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5f4346f0-0a9d-4c18-a095-b7f7f75e8f32 00:12:33.473 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:33.473 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:33.473 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:33.473 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:33.473 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:33.731 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f4346f0-0a9d-4c18-a095-b7f7f75e8f32 -t 2000 00:12:33.988 [ 00:12:33.988 { 00:12:33.988 "name": "5f4346f0-0a9d-4c18-a095-b7f7f75e8f32", 00:12:33.988 "aliases": [ 00:12:33.988 "lvs/lvol" 00:12:33.988 ], 00:12:33.988 "product_name": "Logical Volume", 00:12:33.988 "block_size": 4096, 00:12:33.988 "num_blocks": 38912, 00:12:33.988 "uuid": "5f4346f0-0a9d-4c18-a095-b7f7f75e8f32", 00:12:33.988 "assigned_rate_limits": { 00:12:33.988 
"rw_ios_per_sec": 0, 00:12:33.988 "rw_mbytes_per_sec": 0, 00:12:33.988 "r_mbytes_per_sec": 0, 00:12:33.988 "w_mbytes_per_sec": 0 00:12:33.988 }, 00:12:33.988 "claimed": false, 00:12:33.988 "zoned": false, 00:12:33.988 "supported_io_types": { 00:12:33.988 "read": true, 00:12:33.988 "write": true, 00:12:33.988 "unmap": true, 00:12:33.988 "flush": false, 00:12:33.988 "reset": true, 00:12:33.988 "nvme_admin": false, 00:12:33.988 "nvme_io": false, 00:12:33.988 "nvme_io_md": false, 00:12:33.988 "write_zeroes": true, 00:12:33.988 "zcopy": false, 00:12:33.988 "get_zone_info": false, 00:12:33.988 "zone_management": false, 00:12:33.988 "zone_append": false, 00:12:33.988 "compare": false, 00:12:33.988 "compare_and_write": false, 00:12:33.988 "abort": false, 00:12:33.988 "seek_hole": true, 00:12:33.988 "seek_data": true, 00:12:33.988 "copy": false, 00:12:33.988 "nvme_iov_md": false 00:12:33.988 }, 00:12:33.988 "driver_specific": { 00:12:33.988 "lvol": { 00:12:33.988 "lvol_store_uuid": "87fa608a-bc26-46b6-81ad-441f16262d72", 00:12:33.988 "base_bdev": "aio_bdev", 00:12:33.988 "thin_provision": false, 00:12:33.988 "num_allocated_clusters": 38, 00:12:33.988 "snapshot": false, 00:12:33.988 "clone": false, 00:12:33.988 "esnap_clone": false 00:12:33.988 } 00:12:33.988 } 00:12:33.988 } 00:12:33.988 ] 00:12:33.988 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:33.988 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:33.988 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:34.263 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:34.263 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:34.263 21:34:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:34.520 21:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:34.520 21:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f4346f0-0a9d-4c18-a095-b7f7f75e8f32 00:12:34.777 21:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87fa608a-bc26-46b6-81ad-441f16262d72 00:12:35.342 21:34:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:35.342 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:35.342 00:12:35.342 real 0m17.790s 00:12:35.342 user 0m17.165s 00:12:35.342 sys 0m1.919s 00:12:35.342 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.342 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:35.342 ************************************ 00:12:35.342 END TEST lvs_grow_clean 00:12:35.342 ************************************ 00:12:35.342 21:34:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:35.342 21:34:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:35.342 21:34:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.342 21:34:26 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.342 21:34:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:35.601 ************************************ 00:12:35.601 START TEST lvs_grow_dirty 00:12:35.601 ************************************ 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:35.601 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:35.860 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:35.860 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:35.860 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=df8ca091-2688-429c-83da-a4271b163570 00:12:36.117 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:36.117 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:36.375 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:36.375 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:36.375 21:34:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df8ca091-2688-429c-83da-a4271b163570 lvol 150 00:12:36.632 21:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7388f4a1-3dec-4eb3-b8a8-4a620f2e045d 00:12:36.632 21:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:36.632 21:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:36.890 [2024-07-15 21:34:27.471417] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:36.890 [2024-07-15 21:34:27.471475] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:36.890 
true 00:12:36.890 21:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:36.890 21:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:37.148 21:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:37.148 21:34:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:37.406 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7388f4a1-3dec-4eb3-b8a8-4a620f2e045d 00:12:37.664 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:37.922 [2024-07-15 21:34:28.530528] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.922 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=326411 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 326411 /var/tmp/bdevperf.sock 00:12:38.181 21:34:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 326411 ']' 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.181 21:34:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:38.181 [2024-07-15 21:34:28.835312] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:12:38.181 [2024-07-15 21:34:28.835395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326411 ] 00:12:38.181 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.181 [2024-07-15 21:34:28.945992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.439 [2024-07-15 21:34:29.045704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.373 21:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.373 21:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:39.373 21:34:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:39.629 Nvme0n1 00:12:39.629 21:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:39.885 [ 00:12:39.885 { 00:12:39.885 "name": "Nvme0n1", 00:12:39.885 "aliases": [ 00:12:39.885 "7388f4a1-3dec-4eb3-b8a8-4a620f2e045d" 00:12:39.885 ], 00:12:39.885 "product_name": "NVMe disk", 00:12:39.885 "block_size": 4096, 00:12:39.885 "num_blocks": 38912, 00:12:39.885 "uuid": "7388f4a1-3dec-4eb3-b8a8-4a620f2e045d", 00:12:39.885 "assigned_rate_limits": { 00:12:39.885 "rw_ios_per_sec": 0, 00:12:39.885 "rw_mbytes_per_sec": 0, 00:12:39.885 "r_mbytes_per_sec": 0, 00:12:39.885 "w_mbytes_per_sec": 0 00:12:39.885 }, 00:12:39.885 "claimed": false, 00:12:39.885 "zoned": false, 00:12:39.885 "supported_io_types": { 00:12:39.885 "read": true, 00:12:39.885 "write": true, 
00:12:39.885 "unmap": true, 00:12:39.885 "flush": true, 00:12:39.885 "reset": true, 00:12:39.885 "nvme_admin": true, 00:12:39.885 "nvme_io": true, 00:12:39.885 "nvme_io_md": false, 00:12:39.885 "write_zeroes": true, 00:12:39.885 "zcopy": false, 00:12:39.885 "get_zone_info": false, 00:12:39.885 "zone_management": false, 00:12:39.885 "zone_append": false, 00:12:39.885 "compare": true, 00:12:39.885 "compare_and_write": true, 00:12:39.885 "abort": true, 00:12:39.885 "seek_hole": false, 00:12:39.885 "seek_data": false, 00:12:39.885 "copy": true, 00:12:39.885 "nvme_iov_md": false 00:12:39.885 }, 00:12:39.885 "memory_domains": [ 00:12:39.885 { 00:12:39.885 "dma_device_id": "system", 00:12:39.885 "dma_device_type": 1 00:12:39.885 } 00:12:39.885 ], 00:12:39.885 "driver_specific": { 00:12:39.885 "nvme": [ 00:12:39.885 { 00:12:39.885 "trid": { 00:12:39.885 "trtype": "TCP", 00:12:39.885 "adrfam": "IPv4", 00:12:39.885 "traddr": "10.0.0.2", 00:12:39.885 "trsvcid": "4420", 00:12:39.885 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:39.885 }, 00:12:39.885 "ctrlr_data": { 00:12:39.885 "cntlid": 1, 00:12:39.885 "vendor_id": "0x8086", 00:12:39.885 "model_number": "SPDK bdev Controller", 00:12:39.885 "serial_number": "SPDK0", 00:12:39.885 "firmware_revision": "24.09", 00:12:39.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:39.885 "oacs": { 00:12:39.885 "security": 0, 00:12:39.885 "format": 0, 00:12:39.885 "firmware": 0, 00:12:39.885 "ns_manage": 0 00:12:39.885 }, 00:12:39.885 "multi_ctrlr": true, 00:12:39.885 "ana_reporting": false 00:12:39.885 }, 00:12:39.885 "vs": { 00:12:39.885 "nvme_version": "1.3" 00:12:39.885 }, 00:12:39.885 "ns_data": { 00:12:39.885 "id": 1, 00:12:39.885 "can_share": true 00:12:39.885 } 00:12:39.885 } 00:12:39.885 ], 00:12:39.885 "mp_policy": "active_passive" 00:12:39.885 } 00:12:39.885 } 00:12:39.885 ] 00:12:39.885 21:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=326628 00:12:39.885 21:34:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:39.886 21:34:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:40.142 Running I/O for 10 seconds... 00:12:41.071 Latency(us) 00:12:41.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.071 Nvme0n1 : 1.00 16257.00 63.50 0.00 0.00 0.00 0.00 0.00 00:12:41.071 =================================================================================================================== 00:12:41.071 Total : 16257.00 63.50 0.00 0.00 0.00 0.00 0.00 00:12:41.071 00:12:42.002 21:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df8ca091-2688-429c-83da-a4271b163570 00:12:42.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.002 Nvme0n1 : 2.00 16447.00 64.25 0.00 0.00 0.00 0.00 0.00 00:12:42.002 =================================================================================================================== 00:12:42.002 Total : 16447.00 64.25 0.00 0.00 0.00 0.00 0.00 00:12:42.002 00:12:42.260 true 00:12:42.260 21:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:42.260 21:34:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:42.517 21:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:42.517 21:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 
00:12:42.517 21:34:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 326628 00:12:43.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.082 Nvme0n1 : 3.00 16510.33 64.49 0.00 0.00 0.00 0.00 0.00 00:12:43.082 =================================================================================================================== 00:12:43.082 Total : 16510.33 64.49 0.00 0.00 0.00 0.00 0.00 00:12:43.082 00:12:44.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.016 Nvme0n1 : 4.00 16573.75 64.74 0.00 0.00 0.00 0.00 0.00 00:12:44.016 =================================================================================================================== 00:12:44.016 Total : 16573.75 64.74 0.00 0.00 0.00 0.00 0.00 00:12:44.016 00:12:44.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.969 Nvme0n1 : 5.00 16618.60 64.92 0.00 0.00 0.00 0.00 0.00 00:12:44.969 =================================================================================================================== 00:12:44.969 Total : 16618.60 64.92 0.00 0.00 0.00 0.00 0.00 00:12:44.969 00:12:46.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.340 Nvme0n1 : 6.00 16664.00 65.09 0.00 0.00 0.00 0.00 0.00 00:12:46.340 =================================================================================================================== 00:12:46.340 Total : 16664.00 65.09 0.00 0.00 0.00 0.00 0.00 00:12:46.340 00:12:47.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.269 Nvme0n1 : 7.00 16696.71 65.22 0.00 0.00 0.00 0.00 0.00 00:12:47.269 =================================================================================================================== 00:12:47.269 Total : 16696.71 65.22 0.00 0.00 0.00 0.00 0.00 00:12:47.269 00:12:48.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:12:48.200 Nvme0n1 : 8.00 16728.88 65.35 0.00 0.00 0.00 0.00 0.00 00:12:48.200 =================================================================================================================== 00:12:48.200 Total : 16728.88 65.35 0.00 0.00 0.00 0.00 0.00 00:12:48.200 00:12:49.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.133 Nvme0n1 : 9.00 16761.00 65.47 0.00 0.00 0.00 0.00 0.00 00:12:49.133 =================================================================================================================== 00:12:49.133 Total : 16761.00 65.47 0.00 0.00 0.00 0.00 0.00 00:12:49.133 00:12:50.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.063 Nvme0n1 : 10.00 16774.00 65.52 0.00 0.00 0.00 0.00 0.00 00:12:50.063 =================================================================================================================== 00:12:50.063 Total : 16774.00 65.52 0.00 0.00 0.00 0.00 0.00 00:12:50.063 00:12:50.063 00:12:50.063 Latency(us) 00:12:50.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.063 Nvme0n1 : 10.01 16776.59 65.53 0.00 0.00 7625.24 2779.21 15049.01 00:12:50.063 =================================================================================================================== 00:12:50.063 Total : 16776.59 65.53 0.00 0.00 7625.24 2779.21 15049.01 00:12:50.063 0 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 326411 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 326411 ']' 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 326411 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:50.063 21:34:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 326411 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 326411' 00:12:50.063 killing process with pid 326411 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 326411 00:12:50.063 Received shutdown signal, test time was about 10.000000 seconds 00:12:50.063 00:12:50.063 Latency(us) 00:12:50.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.063 =================================================================================================================== 00:12:50.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.063 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 326411 00:12:50.320 21:34:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.577 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:50.834 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:50.834 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 324401 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 324401 00:12:51.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 324401 Killed "${NVMF_APP[@]}" "$@" 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=327639 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 327639 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 327639 ']' 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.092 
21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.092 21:34:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:51.092 [2024-07-15 21:34:41.866704] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:12:51.092 [2024-07-15 21:34:41.866780] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.350 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.350 [2024-07-15 21:34:41.922094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.350 [2024-07-15 21:34:42.015847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.350 [2024-07-15 21:34:42.015896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.350 [2024-07-15 21:34:42.015926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.350 [2024-07-15 21:34:42.015938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.350 [2024-07-15 21:34:42.015948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:51.350 [2024-07-15 21:34:42.015971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.350 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.350 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:51.350 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.350 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:51.350 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:51.350 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.350 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:51.912 [2024-07-15 21:34:42.412598] blobstore.c:4888:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:51.912 [2024-07-15 21:34:42.412730] blobstore.c:4835:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:51.912 [2024-07-15 21:34:42.412781] blobstore.c:4835:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:51.912 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:51.912 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7388f4a1-3dec-4eb3-b8a8-4a620f2e045d 00:12:51.912 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=7388f4a1-3dec-4eb3-b8a8-4a620f2e045d 00:12:51.912 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:51.912 21:34:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:51.912 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:51.912 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:51.912 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:52.170 21:34:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7388f4a1-3dec-4eb3-b8a8-4a620f2e045d -t 2000 00:12:52.427 [ 00:12:52.427 { 00:12:52.427 "name": "7388f4a1-3dec-4eb3-b8a8-4a620f2e045d", 00:12:52.427 "aliases": [ 00:12:52.427 "lvs/lvol" 00:12:52.427 ], 00:12:52.427 "product_name": "Logical Volume", 00:12:52.427 "block_size": 4096, 00:12:52.427 "num_blocks": 38912, 00:12:52.427 "uuid": "7388f4a1-3dec-4eb3-b8a8-4a620f2e045d", 00:12:52.427 "assigned_rate_limits": { 00:12:52.427 "rw_ios_per_sec": 0, 00:12:52.427 "rw_mbytes_per_sec": 0, 00:12:52.427 "r_mbytes_per_sec": 0, 00:12:52.427 "w_mbytes_per_sec": 0 00:12:52.427 }, 00:12:52.427 "claimed": false, 00:12:52.427 "zoned": false, 00:12:52.427 "supported_io_types": { 00:12:52.427 "read": true, 00:12:52.427 "write": true, 00:12:52.427 "unmap": true, 00:12:52.427 "flush": false, 00:12:52.427 "reset": true, 00:12:52.427 "nvme_admin": false, 00:12:52.427 "nvme_io": false, 00:12:52.427 "nvme_io_md": false, 00:12:52.427 "write_zeroes": true, 00:12:52.427 "zcopy": false, 00:12:52.427 "get_zone_info": false, 00:12:52.427 "zone_management": false, 00:12:52.427 "zone_append": false, 00:12:52.427 "compare": false, 00:12:52.427 "compare_and_write": false, 00:12:52.427 "abort": false, 00:12:52.427 "seek_hole": true, 00:12:52.427 "seek_data": true, 00:12:52.427 "copy": false, 00:12:52.427 "nvme_iov_md": false 
00:12:52.427 }, 00:12:52.427 "driver_specific": { 00:12:52.427 "lvol": { 00:12:52.427 "lvol_store_uuid": "df8ca091-2688-429c-83da-a4271b163570", 00:12:52.427 "base_bdev": "aio_bdev", 00:12:52.427 "thin_provision": false, 00:12:52.427 "num_allocated_clusters": 38, 00:12:52.427 "snapshot": false, 00:12:52.427 "clone": false, 00:12:52.427 "esnap_clone": false 00:12:52.427 } 00:12:52.427 } 00:12:52.427 } 00:12:52.427 ] 00:12:52.427 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:52.428 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:52.428 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:52.685 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:52.685 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:52.685 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:52.943 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:52.943 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:52.943 [2024-07-15 21:34:43.713740] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
df8ca091-2688-429c-83da-a4271b163570 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:53.201 request: 00:12:53.201 { 00:12:53.201 "uuid": "df8ca091-2688-429c-83da-a4271b163570", 00:12:53.201 "method": "bdev_lvol_get_lvstores", 
00:12:53.201 "req_id": 1 00:12:53.201 } 00:12:53.201 Got JSON-RPC error response 00:12:53.201 response: 00:12:53.201 { 00:12:53.201 "code": -19, 00:12:53.201 "message": "No such device" 00:12:53.201 } 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:53.201 21:34:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:53.459 aio_bdev 00:12:53.459 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7388f4a1-3dec-4eb3-b8a8-4a620f2e045d 00:12:53.459 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=7388f4a1-3dec-4eb3-b8a8-4a620f2e045d 00:12:53.459 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:53.459 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:53.459 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:53.459 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:53.459 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:53.716 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7388f4a1-3dec-4eb3-b8a8-4a620f2e045d -t 2000 00:12:53.974 [ 00:12:53.974 { 00:12:53.974 "name": "7388f4a1-3dec-4eb3-b8a8-4a620f2e045d", 00:12:53.974 "aliases": [ 00:12:53.974 "lvs/lvol" 00:12:53.974 ], 00:12:53.974 "product_name": "Logical Volume", 00:12:53.974 "block_size": 4096, 00:12:53.974 "num_blocks": 38912, 00:12:53.974 "uuid": "7388f4a1-3dec-4eb3-b8a8-4a620f2e045d", 00:12:53.974 "assigned_rate_limits": { 00:12:53.974 "rw_ios_per_sec": 0, 00:12:53.974 "rw_mbytes_per_sec": 0, 00:12:53.974 "r_mbytes_per_sec": 0, 00:12:53.974 "w_mbytes_per_sec": 0 00:12:53.974 }, 00:12:53.974 "claimed": false, 00:12:53.974 "zoned": false, 00:12:53.974 "supported_io_types": { 00:12:53.974 "read": true, 00:12:53.974 "write": true, 00:12:53.974 "unmap": true, 00:12:53.974 "flush": false, 00:12:53.974 "reset": true, 00:12:53.974 "nvme_admin": false, 00:12:53.974 "nvme_io": false, 00:12:53.974 "nvme_io_md": false, 00:12:53.974 "write_zeroes": true, 00:12:53.974 "zcopy": false, 00:12:53.974 "get_zone_info": false, 00:12:53.974 "zone_management": false, 00:12:53.974 "zone_append": false, 00:12:53.974 "compare": false, 00:12:53.974 "compare_and_write": false, 00:12:53.974 "abort": false, 00:12:53.974 "seek_hole": true, 00:12:53.974 "seek_data": true, 00:12:53.974 "copy": false, 00:12:53.974 "nvme_iov_md": false 00:12:53.974 }, 00:12:53.975 "driver_specific": { 00:12:53.975 "lvol": { 00:12:53.975 "lvol_store_uuid": "df8ca091-2688-429c-83da-a4271b163570", 00:12:53.975 "base_bdev": "aio_bdev", 00:12:53.975 "thin_provision": false, 00:12:53.975 "num_allocated_clusters": 38, 00:12:53.975 "snapshot": false, 00:12:53.975 "clone": false, 00:12:53.975 "esnap_clone": false 00:12:53.975 } 00:12:53.975 } 00:12:53.975 } 00:12:53.975 ] 00:12:53.975 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:53.975 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:53.975 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:54.233 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:54.233 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:54.233 21:34:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df8ca091-2688-429c-83da-a4271b163570 00:12:54.490 21:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:54.490 21:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7388f4a1-3dec-4eb3-b8a8-4a620f2e045d 00:12:54.748 21:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df8ca091-2688-429c-83da-a4271b163570 00:12:55.006 21:34:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:55.263 21:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:55.263 00:12:55.263 real 0m19.873s 00:12:55.263 user 0m51.180s 00:12:55.263 sys 0m4.317s 00:12:55.263 21:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.263 21:34:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
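The `free_clusters`/`total_data_clusters` checks above pull single fields out of the `bdev_lvol_get_lvstores` JSON with `jq -r '.[0].free_clusters'` and compare them against the expected counts. A minimal standalone sketch of that check, using a canned response (the UUID and counts are copied from the log above; `python3` stands in for `jq` so the snippet runs anywhere):

```shell
# Canned bdev_lvol_get_lvstores output; a live run needs a running SPDK target,
# so the response is hard-coded here for illustration (values from the log above).
response='[{"uuid":"df8ca091-2688-429c-83da-a4271b163570","free_clusters":61,"total_data_clusters":99}]'

# Extract the two fields the test script reads with jq.
free_clusters=$(printf '%s' "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["free_clusters"])')
data_clusters=$(printf '%s' "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["total_data_clusters"])')

# lvs_grow_dirty passes only when both counts match the expected values.
(( free_clusters == 61 )) && (( data_clusters == 99 )) && echo "cluster counts OK"
```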
00:12:55.263 ************************************ 00:12:55.263 END TEST lvs_grow_dirty 00:12:55.263 ************************************ 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:55.520 nvmf_trace.0 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.520 rmmod 
nvme_tcp 00:12:55.520 rmmod nvme_fabrics 00:12:55.520 rmmod nvme_keyring 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 327639 ']' 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 327639 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 327639 ']' 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 327639 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 327639 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 327639' 00:12:55.520 killing process with pid 327639 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 327639 00:12:55.520 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 327639 00:12:55.779 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:55.779 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:55.779 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:55.779 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.779 
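`killprocess` above does more than `kill`: it signals the pid and then reaps it, so the nvmf target is fully gone before teardown continues. A self-contained demo of the kill-then-wait pattern (the `sleep` is a stand-in for the real target process):

```shell
# Start a throwaway background process standing in for the nvmf target.
sleep 60 &
pid=$!

# Signal it, then wait so the shell reaps it before cleanup proceeds;
# wait returns the non-zero kill status, hence the || true.
kill "$pid"
wait "$pid" 2>/dev/null || true
echo "process $pid reaped"
```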
21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:55.779 21:34:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.779 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.779 21:34:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.687 21:34:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:57.687 00:12:57.687 real 0m42.637s 00:12:57.687 user 1m13.929s 00:12:57.687 sys 0m7.850s 00:12:57.687 21:34:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.687 21:34:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:57.687 ************************************ 00:12:57.687 END TEST nvmf_lvs_grow 00:12:57.687 ************************************ 00:12:57.687 21:34:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:57.687 21:34:48 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:57.687 21:34:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:57.687 21:34:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.687 21:34:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.687 ************************************ 00:12:57.687 START TEST nvmf_bdev_io_wait 00:12:57.687 ************************************ 00:12:57.687 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:57.966 * Looking for test storage... 
00:12:57.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.966 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:57.967 21:34:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.868 21:34:50 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:59.868 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:59.868 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:59.868 21:34:50 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:59.868 Found net devices under 0000:08:00.0: cvl_0_0 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:59.868 Found net devices under 0000:08:00.1: cvl_0_1 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:59.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:59.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:12:59.868 00:12:59.868 --- 10.0.0.2 ping statistics --- 00:12:59.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.868 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:12:59.868 00:12:59.868 --- 10.0.0.1 ping statistics --- 00:12:59.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.868 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=329603 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 329603 00:12:59.868 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 329603 ']' 00:12:59.869 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.869 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.869 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.869 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.869 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:59.869 [2024-07-15 21:34:50.414495] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:12:59.869 [2024-07-15 21:34:50.414598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.869 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.869 [2024-07-15 21:34:50.483842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.869 [2024-07-15 21:34:50.605285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
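`waitforlisten` above polls until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock`, giving up after a bounded number of retries (`max_retries=100` in the log). A self-contained sketch of that polling pattern, with a file check standing in for the real socket probe:

```shell
# Stand-in readiness probe: the real harness checks the RPC socket; here a
# temp file that already exists plays the role of "target is listening".
ready_file=$(mktemp)
is_listening() { [ -e "$ready_file" ]; }

attempts=0
max_retries=100
until is_listening; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge "$max_retries" ]; then
    echo "timed out waiting for listener"
    break
  fi
  sleep 0.1
done
echo "ready after $attempts retries"
rm -f "$ready_file"
```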
00:12:59.869 [2024-07-15 21:34:50.605338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.869 [2024-07-15 21:34:50.605354] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.869 [2024-07-15 21:34:50.605367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.869 [2024-07-15 21:34:50.605379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.869 [2024-07-15 21:34:50.605442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.869 [2024-07-15 21:34:50.605508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.869 [2024-07-15 21:34:50.605560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.869 [2024-07-15 21:34:50.605564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:00.126 [2024-07-15 21:34:50.769184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:00.126 Malloc0 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:00.126 [2024-07-15 21:34:50.837679] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=329631 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=329632 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=329635 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:00.126 { 00:13:00.126 "params": { 00:13:00.126 "name": "Nvme$subsystem", 00:13:00.126 "trtype": "$TEST_TRANSPORT", 00:13:00.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.126 "adrfam": "ipv4", 00:13:00.126 "trsvcid": "$NVMF_PORT", 00:13:00.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.126 "hdgst": ${hdgst:-false}, 00:13:00.126 "ddgst": ${ddgst:-false} 00:13:00.126 }, 00:13:00.126 "method": "bdev_nvme_attach_controller" 00:13:00.126 } 00:13:00.126 EOF 00:13:00.126 )") 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:00.126 { 00:13:00.126 "params": { 00:13:00.126 "name": "Nvme$subsystem", 00:13:00.126 "trtype": "$TEST_TRANSPORT", 00:13:00.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.126 "adrfam": "ipv4", 00:13:00.126 "trsvcid": "$NVMF_PORT", 00:13:00.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.126 "hdgst": ${hdgst:-false}, 00:13:00.126 "ddgst": ${ddgst:-false} 00:13:00.126 }, 00:13:00.126 "method": "bdev_nvme_attach_controller" 00:13:00.126 } 00:13:00.126 EOF 00:13:00.126 )") 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=329637 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:00.126 { 00:13:00.126 "params": { 00:13:00.126 "name": "Nvme$subsystem", 00:13:00.126 "trtype": "$TEST_TRANSPORT", 00:13:00.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.126 "adrfam": "ipv4", 00:13:00.126 "trsvcid": "$NVMF_PORT", 00:13:00.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.126 "hdgst": ${hdgst:-false}, 00:13:00.126 "ddgst": ${ddgst:-false} 00:13:00.126 }, 00:13:00.126 "method": "bdev_nvme_attach_controller" 00:13:00.126 } 00:13:00.126 EOF 00:13:00.126 )") 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:00.126 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:00.126 
21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:00.127 { 00:13:00.127 "params": { 00:13:00.127 "name": "Nvme$subsystem", 00:13:00.127 "trtype": "$TEST_TRANSPORT", 00:13:00.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.127 "adrfam": "ipv4", 00:13:00.127 "trsvcid": "$NVMF_PORT", 00:13:00.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.127 "hdgst": ${hdgst:-false}, 00:13:00.127 "ddgst": ${ddgst:-false} 00:13:00.127 }, 00:13:00.127 "method": "bdev_nvme_attach_controller" 00:13:00.127 } 00:13:00.127 EOF 00:13:00.127 )") 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 329631 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:00.127 "params": { 00:13:00.127 "name": "Nvme1", 00:13:00.127 "trtype": "tcp", 00:13:00.127 "traddr": "10.0.0.2", 00:13:00.127 "adrfam": "ipv4", 00:13:00.127 "trsvcid": "4420", 00:13:00.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.127 "hdgst": false, 00:13:00.127 "ddgst": false 00:13:00.127 }, 00:13:00.127 "method": "bdev_nvme_attach_controller" 00:13:00.127 }' 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:00.127 "params": { 00:13:00.127 "name": "Nvme1", 00:13:00.127 "trtype": "tcp", 00:13:00.127 "traddr": "10.0.0.2", 00:13:00.127 "adrfam": "ipv4", 00:13:00.127 "trsvcid": "4420", 00:13:00.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.127 "hdgst": false, 00:13:00.127 "ddgst": false 00:13:00.127 }, 00:13:00.127 "method": "bdev_nvme_attach_controller" 00:13:00.127 }' 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:00.127 "params": { 00:13:00.127 "name": "Nvme1", 00:13:00.127 "trtype": "tcp", 00:13:00.127 "traddr": "10.0.0.2", 00:13:00.127 "adrfam": "ipv4", 00:13:00.127 "trsvcid": "4420", 00:13:00.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.127 "hdgst": false, 00:13:00.127 "ddgst": false 00:13:00.127 }, 00:13:00.127 "method": "bdev_nvme_attach_controller" 00:13:00.127 }' 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:00.127 21:34:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 
00:13:00.127 "params": { 00:13:00.127 "name": "Nvme1", 00:13:00.127 "trtype": "tcp", 00:13:00.127 "traddr": "10.0.0.2", 00:13:00.127 "adrfam": "ipv4", 00:13:00.127 "trsvcid": "4420", 00:13:00.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.127 "hdgst": false, 00:13:00.127 "ddgst": false 00:13:00.127 }, 00:13:00.127 "method": "bdev_nvme_attach_controller" 00:13:00.127 }' 00:13:00.127 [2024-07-15 21:34:50.889567] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:13:00.127 [2024-07-15 21:34:50.889567] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:13:00.127 [2024-07-15 21:34:50.889569] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:13:00.127 [2024-07-15 21:34:50.889670] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:00.127 [2024-07-15 21:34:50.889671] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:00.127 [2024-07-15 21:34:50.889671] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:00.127 [2024-07-15 21:34:50.891467] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:13:00.127 [2024-07-15 21:34:50.891548] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:00.384 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.384 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.384 [2024-07-15 21:34:51.027765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.384 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.384 [2024-07-15 21:34:51.096830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.384 [2024-07-15 21:34:51.113307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:00.384 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.384 [2024-07-15 21:34:51.165975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.640 [2024-07-15 21:34:51.184228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:00.640 [2024-07-15 21:34:51.224497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.640 [2024-07-15 21:34:51.254527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:00.640 [2024-07-15 21:34:51.311289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:00.640 Running I/O for 1 seconds... 00:13:00.640 Running I/O for 1 seconds... 00:13:00.897 Running I/O for 1 seconds... 00:13:00.897 Running I/O for 1 seconds... 
00:13:01.832 00:13:01.832 Latency(us) 00:13:01.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.832 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:01.833 Nvme1n1 : 1.01 11836.04 46.23 0.00 0.00 10771.12 4296.25 23690.05 00:13:01.833 =================================================================================================================== 00:13:01.833 Total : 11836.04 46.23 0.00 0.00 10771.12 4296.25 23690.05 00:13:01.833 00:13:01.833 Latency(us) 00:13:01.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.833 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:01.833 Nvme1n1 : 1.02 5921.71 23.13 0.00 0.00 21476.76 6553.60 35340.89 00:13:01.833 =================================================================================================================== 00:13:01.833 Total : 5921.71 23.13 0.00 0.00 21476.76 6553.60 35340.89 00:13:01.833 00:13:01.833 Latency(us) 00:13:01.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.833 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:01.833 Nvme1n1 : 1.00 187574.54 732.71 0.00 0.00 679.45 280.65 867.75 00:13:01.833 =================================================================================================================== 00:13:01.833 Total : 187574.54 732.71 0.00 0.00 679.45 280.65 867.75 00:13:01.833 00:13:01.833 Latency(us) 00:13:01.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.833 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:01.833 Nvme1n1 : 1.01 6331.92 24.73 0.00 0.00 20141.36 5995.33 52817.16 00:13:01.833 =================================================================================================================== 00:13:01.833 Total : 6331.92 24.73 0.00 0.00 20141.36 5995.33 52817.16 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 329632 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 329635 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 329637 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.092 rmmod nvme_tcp 00:13:02.092 rmmod nvme_fabrics 00:13:02.092 rmmod nvme_keyring 00:13:02.092 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 329603 ']' 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 329603 00:13:02.351 21:34:52 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 329603 ']' 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 329603 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 329603 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 329603' 00:13:02.351 killing process with pid 329603 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 329603 00:13:02.351 21:34:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 329603 00:13:02.351 21:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.351 21:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.351 21:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.351 21:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.351 21:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.351 21:34:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.351 21:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.351 21:34:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.887 21:34:55 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.887 00:13:04.887 real 0m6.688s 00:13:04.887 user 0m15.686s 00:13:04.887 sys 0m3.083s 00:13:04.887 21:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.887 21:34:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:04.887 ************************************ 00:13:04.887 END TEST nvmf_bdev_io_wait 00:13:04.887 ************************************ 00:13:04.887 21:34:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:04.887 21:34:55 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:04.887 21:34:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:04.887 21:34:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.887 21:34:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:04.887 ************************************ 00:13:04.887 START TEST nvmf_queue_depth 00:13:04.887 ************************************ 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:04.887 * Looking for test storage... 
00:13:04.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.887 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.888 21:34:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.269 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.269 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:06.269 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:13:06.269 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:06.269 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:06.269 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:06.270 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:06.270 Found 0000:08:00.1 (0x8086 - 
0x159b) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:06.270 Found net devices under 0000:08:00.0: cvl_0_0 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:06.270 Found net devices under 0000:08:00.1: cvl_0_1 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:13:06.270 00:13:06.270 --- 10.0.0.2 ping statistics --- 00:13:06.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.270 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:13:06.270 21:34:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:13:06.270 00:13:06.270 --- 10.0.0.1 ping statistics --- 00:13:06.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.270 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=331343 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 331343 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 331343 ']' 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.270 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.529 [2024-07-15 21:34:57.091801] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:13:06.529 [2024-07-15 21:34:57.091903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.530 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.530 [2024-07-15 21:34:57.157234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.530 [2024-07-15 21:34:57.275522] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:06.530 [2024-07-15 21:34:57.275578] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.530 [2024-07-15 21:34:57.275594] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.530 [2024-07-15 21:34:57.275608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.530 [2024-07-15 21:34:57.275621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.530 [2024-07-15 21:34:57.275658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.789 [2024-07-15 21:34:57.411794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:06.789 21:34:57 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.789 Malloc0 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.789 [2024-07-15 21:34:57.480096] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=331378 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id 
$NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 331378 /var/tmp/bdevperf.sock 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 331378 ']' 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.789 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:06.789 [2024-07-15 21:34:57.530723] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:13:06.789 [2024-07-15 21:34:57.530820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331378 ] 00:13:06.789 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.048 [2024-07-15 21:34:57.592163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.048 [2024-07-15 21:34:57.691731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.048 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:07.048 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:07.048 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:07.048 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.048 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:07.309 NVMe0n1 00:13:07.309 21:34:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.309 21:34:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:07.309 Running I/O for 10 seconds... 
00:13:19.528 00:13:19.528 Latency(us) 00:13:19.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.528 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:19.528 Verification LBA range: start 0x0 length 0x4000 00:13:19.528 NVMe0n1 : 10.08 9506.01 37.13 0.00 0.00 107234.07 24175.50 64468.01 00:13:19.528 =================================================================================================================== 00:13:19.528 Total : 9506.01 37.13 0.00 0.00 107234.07 24175.50 64468.01 00:13:19.528 0 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 331378 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 331378 ']' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 331378 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 331378 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 331378' 00:13:19.528 killing process with pid 331378 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 331378 00:13:19.528 Received shutdown signal, test time was about 10.000000 seconds 00:13:19.528 00:13:19.528 Latency(us) 00:13:19.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.528 
=================================================================================================================== 00:13:19.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 331378 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:19.528 rmmod nvme_tcp 00:13:19.528 rmmod nvme_fabrics 00:13:19.528 rmmod nvme_keyring 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 331343 ']' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 331343 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 331343 ']' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 331343 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.528 21:35:08 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 331343 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 331343' 00:13:19.528 killing process with pid 331343 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 331343 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 331343 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.528 21:35:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.094 21:35:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:20.094 00:13:20.094 real 0m15.567s 00:13:20.094 user 0m22.561s 00:13:20.094 sys 0m2.598s 00:13:20.094 21:35:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:20.094 21:35:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:20.094 ************************************ 00:13:20.094 END TEST nvmf_queue_depth 00:13:20.094 
************************************ 00:13:20.094 21:35:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:20.094 21:35:10 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:20.094 21:35:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:20.094 21:35:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.094 21:35:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:20.094 ************************************ 00:13:20.094 START TEST nvmf_target_multipath 00:13:20.094 ************************************ 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:20.094 * Looking for test storage... 00:13:20.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.094 
21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.094 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.352 21:35:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.352 21:35:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.352 21:35:10 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.352 21:35:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.352 21:35:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.352 21:35:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.352 21:35:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:20.353 21:35:10 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.259 
21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.259 21:35:12 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:22.259 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:22.259 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.259 
21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:22.259 Found net devices under 0000:08:00.0: cvl_0_0 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:22.259 Found net devices under 0000:08:00.1: cvl_0_1 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.259 21:35:12 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.259 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:22.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:13:22.259 00:13:22.259 --- 10.0.0.2 ping statistics --- 00:13:22.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.260 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:13:22.260 00:13:22.260 --- 10.0.0.1 ping statistics --- 00:13:22.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.260 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:22.260 only one NIC for nvmf test 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.260 rmmod nvme_tcp 00:13:22.260 rmmod nvme_fabrics 00:13:22.260 rmmod nvme_keyring 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.260 21:35:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.166 21:35:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.167 21:35:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:24.167 21:35:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:24.167 00:13:24.167 real 0m3.989s 00:13:24.167 user 0m0.695s 00:13:24.167 sys 0m1.277s 00:13:24.167 21:35:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:24.167 21:35:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:24.167 ************************************ 00:13:24.167 END TEST nvmf_target_multipath 00:13:24.167 ************************************ 00:13:24.167 21:35:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:24.167 21:35:14 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:24.167 21:35:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:24.167 21:35:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.167 21:35:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:24.167 ************************************ 00:13:24.167 START TEST nvmf_zcopy 00:13:24.167 ************************************ 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:24.167 * Looking for test storage... 
00:13:24.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.167 21:35:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:26.073 21:35:16 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:26.073 21:35:16 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:26.073 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.073 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:26.074 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:26.074 Found net devices under 0000:08:00.0: cvl_0_0 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:26.074 Found net devices under 0000:08:00.1: cvl_0_1 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.074 21:35:16 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:26.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:13:26.074 00:13:26.074 --- 10.0.0.2 ping statistics --- 00:13:26.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.074 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:13:26.074 00:13:26.074 --- 10.0.0.1 ping statistics --- 00:13:26.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.074 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=335356 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 335356 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 335356 ']' 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.074 21:35:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.074 [2024-07-15 21:35:16.791505] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:13:26.074 [2024-07-15 21:35:16.791604] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.074 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.074 [2024-07-15 21:35:16.857170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.332 [2024-07-15 21:35:16.972231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.332 [2024-07-15 21:35:16.972292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.332 [2024-07-15 21:35:16.972308] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.332 [2024-07-15 21:35:16.972323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.332 [2024-07-15 21:35:16.972335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:26.332 [2024-07-15 21:35:16.972365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.332 [2024-07-15 21:35:17.108333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.332 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.333 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.333 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:13:26.333 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.333 [2024-07-15 21:35:17.124459] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.590 malloc0 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config
00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:26.590 {
00:13:26.590 "params": {
00:13:26.590 "name": "Nvme$subsystem",
00:13:26.590 "trtype": "$TEST_TRANSPORT",
00:13:26.590 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:26.590 "adrfam": "ipv4",
00:13:26.590 "trsvcid": "$NVMF_PORT",
00:13:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:26.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:26.590 "hdgst": ${hdgst:-false},
00:13:26.590 "ddgst": ${ddgst:-false}
00:13:26.590 },
00:13:26.590 "method": "bdev_nvme_attach_controller"
00:13:26.590 }
00:13:26.590 EOF
00:13:26.590 )")
00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:13:26.590 21:35:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:26.590 "params": {
00:13:26.590 "name": "Nvme1",
00:13:26.590 "trtype": "tcp",
00:13:26.590 "traddr": "10.0.0.2",
00:13:26.590 "adrfam": "ipv4",
00:13:26.590 "trsvcid": "4420",
00:13:26.590 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:26.590 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:26.590 "hdgst": false,
00:13:26.590 "ddgst": false
00:13:26.590 },
00:13:26.590 "method": "bdev_nvme_attach_controller"
00:13:26.590 }'
00:13:26.590 [2024-07-15 21:35:17.206717] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:13:26.591 [2024-07-15 21:35:17.206812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335378 ]
00:13:26.591 EAL: No free 2048 kB hugepages reported on node 1
00:13:26.591 [2024-07-15 21:35:17.268076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:26.848 [2024-07-15 21:35:17.386926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:27.107 Running I/O for 10 seconds...
00:13:37.089
00:13:37.090 Latency(us)
00:13:37.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:37.090 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:37.090 Verification LBA range: start 0x0 length 0x1000
00:13:37.090 Nvme1n1 : 10.02 5736.86 44.82 0.00 0.00 22250.61 3070.48 31263.10
00:13:37.090 ===================================================================================================================
00:13:37.090 Total : 5736.86 44.82 0.00 0.00 22250.61 3070.48 31263.10
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=336376
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:37.349 21:35:27
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:37.349 {
00:13:37.349 "params": {
00:13:37.349 "name": "Nvme$subsystem",
00:13:37.349 "trtype": "$TEST_TRANSPORT",
00:13:37.349 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:37.349 "adrfam": "ipv4",
00:13:37.349 "trsvcid": "$NVMF_PORT",
00:13:37.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:37.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:37.349 "hdgst": ${hdgst:-false},
00:13:37.349 "ddgst": ${ddgst:-false}
00:13:37.349 },
00:13:37.349 "method": "bdev_nvme_attach_controller"
00:13:37.349 }
00:13:37.349 EOF
00:13:37.349 )")
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:37.349 [2024-07-15 21:35:27.949502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:37.349 [2024-07-15 21:35:27.949540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:37.349 21:35:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:37.349 "params": { 00:13:37.349 "name": "Nvme1", 00:13:37.349 "trtype": "tcp", 00:13:37.349 "traddr": "10.0.0.2", 00:13:37.350 "adrfam": "ipv4", 00:13:37.350 "trsvcid": "4420", 00:13:37.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:37.350 "hdgst": false, 00:13:37.350 "ddgst": false 00:13:37.350 }, 00:13:37.350 "method": "bdev_nvme_attach_controller" 00:13:37.350 }' 00:13:37.350 [2024-07-15 21:35:27.957462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:27.957482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:27.965481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:27.965499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:27.973504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:27.973523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:27.981525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:27.981544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:27.989549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:27.989568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:27.992955] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:13:37.350 [2024-07-15 21:35:27.993046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336376 ] 00:13:37.350 [2024-07-15 21:35:27.997571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:27.997591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.005588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.005607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.013608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.013626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.350 [2024-07-15 21:35:28.021635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.021654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.029670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.029694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.037681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.037702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.045702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.045721] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.051202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.350 [2024-07-15 21:35:28.053758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.053792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.061823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.061870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.069796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.069827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.077793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.077815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.085823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.085849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.093844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.093868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.101860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.101881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.109938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:13:37.350 [2024-07-15 21:35:28.109982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.117945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.117983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.125920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.125941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.133957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.133982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.350 [2024-07-15 21:35:28.141971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.350 [2024-07-15 21:35:28.141996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.149988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.150011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.158014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.158039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.158474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.610 [2024-07-15 21:35:28.166047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.166067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.174115] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.174169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.182146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.182186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.190171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.190217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.198187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.198232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.206212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.206253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.214220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.214261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.222263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.222309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.230278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.230323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.238230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.238254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.246251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.246275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.254264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.254284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.262302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.262325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.270313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.270334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.278334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.278355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.286358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.286386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.294380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.294401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.302398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 
[2024-07-15 21:35:28.302419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.310419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.310439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.318442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.318461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.326465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.326484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.334487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.334506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.342516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.342537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.350534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.350553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.358556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.358575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.366581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.366600] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.374606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.374625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.382632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.382652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.390654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.390674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.610 [2024-07-15 21:35:28.398674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.610 [2024-07-15 21:35:28.398693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.406696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.406715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.414718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.414737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.422741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.422760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.430764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.430789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:37.870 [2024-07-15 21:35:28.439106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.439136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.446814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.446835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 Running I/O for 5 seconds... 00:13:37.870 [2024-07-15 21:35:28.454833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.454853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.469040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.469066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.479104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.479131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.489322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.489347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.499428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.499453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:37.870 [2024-07-15 21:35:28.509647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:37.870 [2024-07-15 21:35:28.509672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:13:37.870 [2024-07-15 21:35:28.519781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:37.870 [2024-07-15 21:35:28.519806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:37.870 [2024-07-15 21:35:28.529885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:37.870 [2024-07-15 21:35:28.529909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:37.870 [2024-07-15 21:35:28.539583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:37.870 [2024-07-15 21:35:28.539607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record error pair repeats roughly every 10 ms, from 2024-07-15 21:35:28.55 through 21:35:30.19 (log timestamps 00:13:37.870 to 00:13:39.460) ...]
add namespace 00:13:39.460 [2024-07-15 21:35:30.202626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.460 [2024-07-15 21:35:30.202651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.460 [2024-07-15 21:35:30.213537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.460 [2024-07-15 21:35:30.213561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.460 [2024-07-15 21:35:30.225804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.460 [2024-07-15 21:35:30.225832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.460 [2024-07-15 21:35:30.234941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.460 [2024-07-15 21:35:30.234965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.247432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.247456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.258864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.258888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.268222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.268256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.279300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.279324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.290936] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.290960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.299794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.299818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.309356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.309380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.319506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.319530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.329589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.329621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.339983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.340007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.352708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.352750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.362642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.362667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.372591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.372615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.750 [2024-07-15 21:35:30.382431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.750 [2024-07-15 21:35:30.382455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.392185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.392209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.401880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.401903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.412224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.412248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.425199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.425223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.434943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.434967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.444594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.444618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.454333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 
[2024-07-15 21:35:30.454357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.464375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.464398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.474472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.474495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.484322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.484346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.493959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.493982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.503979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.504003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.751 [2024-07-15 21:35:30.513804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.751 [2024-07-15 21:35:30.513828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.055 [2024-07-15 21:35:30.524432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.055 [2024-07-15 21:35:30.524457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.055 [2024-07-15 21:35:30.534811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.055 [2024-07-15 21:35:30.534835] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.055 [2024-07-15 21:35:30.544748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.055 [2024-07-15 21:35:30.544772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.055 [2024-07-15 21:35:30.554836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.055 [2024-07-15 21:35:30.554860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.055 [2024-07-15 21:35:30.564915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.055 [2024-07-15 21:35:30.564940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.055 [2024-07-15 21:35:30.575182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.575210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.584934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.584958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.594907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.594930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.605261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.605285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.615477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.615505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:40.056 [2024-07-15 21:35:30.626559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.626583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.636248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.636272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.646244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.646268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.656317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.656341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.666020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.666044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.676000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.676024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.685882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.685906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.695955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.695978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.705905] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.705929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.715971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.715996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.726365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.726391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.736616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.736642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.746749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.746774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.757152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.757177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.767183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.767207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.777528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.777553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.787873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.787899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.800196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.800222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.811442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.811467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.819938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.819963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.830818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.830843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.056 [2024-07-15 21:35:30.840915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.056 [2024-07-15 21:35:30.840940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.851032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.851058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.861219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.861244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.871572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 
[2024-07-15 21:35:30.871598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.881314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.881338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.891980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.892005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.903928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.903953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.913790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.913824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.923944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.923969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.934148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.934173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.944587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.944612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.954764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.954798] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.965247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.965272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.975611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.975637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.985846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.985879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:30.996116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:30.996149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.006527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.006552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.017132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.017165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.027423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.027448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.037667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.037693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:40.333 [2024-07-15 21:35:31.048301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.048327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.061006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.061031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.070379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.070404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.080467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.080490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.090785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.090809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.100729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.100753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.110387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.110411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.333 [2024-07-15 21:35:31.120338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.333 [2024-07-15 21:35:31.120362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.130282] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.130306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.140302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.140326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.150301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.150332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.160327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.160352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.170085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.170117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.179980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.180003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.189865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.189889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.199830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.199854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.210114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.210158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.223060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.223083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.233083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.233107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.242888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.242912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.252632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.252655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.262303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.262327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.272322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.272345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.282373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.282397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.292529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 
[2024-07-15 21:35:31.292553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.302274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.302298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.312182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.312206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.322250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.322274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.332013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.332037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.343735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.343768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.353077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.353101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.362856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.362881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.373017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.373041] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.594 [2024-07-15 21:35:31.383329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.594 [2024-07-15 21:35:31.383353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.393458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.393483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.403668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.403692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.413954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.413978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.423964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.423988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.433908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.433931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.443780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.443811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.453983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.454007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:40.854 [2024-07-15 21:35:31.463969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.463997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.473938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.473962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.483925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.483949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.493770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.493794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.503488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.503511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.513331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.513355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.523214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.523238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.533265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.533296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.543717] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.543743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.553828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.553852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.854 [2024-07-15 21:35:31.563884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.854 [2024-07-15 21:35:31.563907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.855 [2024-07-15 21:35:31.573886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.855 [2024-07-15 21:35:31.573912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.855 [2024-07-15 21:35:31.583744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.855 [2024-07-15 21:35:31.583776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.855 [2024-07-15 21:35:31.593706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.855 [2024-07-15 21:35:31.593729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.855 [2024-07-15 21:35:31.603404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.855 [2024-07-15 21:35:31.603427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.855 [2024-07-15 21:35:31.613248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.855 [2024-07-15 21:35:31.613272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.855 [2024-07-15 21:35:31.623392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:40.855 [2024-07-15 21:35:31.623416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.855 [2024-07-15 21:35:31.633459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.855 [2024-07-15 21:35:31.633483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.855 [2024-07-15 21:35:31.643515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.855 [2024-07-15 21:35:31.643539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.653548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.653572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.663587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.663611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.673583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.673613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.683372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.683396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.693238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.693261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.703067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 
[2024-07-15 21:35:31.703091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.713445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.713469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.723746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.723781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.733582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.733606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.743675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.743699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.753726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.753750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.763786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.763809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.773551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.773574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.783381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.783404] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.793524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.793548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.803398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.803422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.813172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.813195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.823117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.823155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.832914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.832937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.842564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.842587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.852211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.852235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.862101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.862125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:41.116 [2024-07-15 21:35:31.872263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.872287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.882376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.882403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.893880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.893903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.116 [2024-07-15 21:35:31.903641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.116 [2024-07-15 21:35:31.903664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.913712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.913742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.923633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.923657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.933335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.933358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.943215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.943239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.952947] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.952971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.962383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.962406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.972180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.972204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.982212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.982236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:31.992322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:31.992347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.002402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.002426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.012302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.012342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.021956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.021979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.032100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.032125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.041961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.041986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.052445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.052470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.062728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.062752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.073133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.073168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.083319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.083343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.093597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.093622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.103663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.103687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.114319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 
[2024-07-15 21:35:32.114344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.126315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.126339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.136091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.136116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.146259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.146283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.156303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.156327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.376 [2024-07-15 21:35:32.166041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.376 [2024-07-15 21:35:32.166065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.176021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.176047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.185983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.186007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.196074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.196097] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.205873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.205897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.217549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.217573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.227446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.227470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.237346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.237370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.247488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.247512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.257201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.257225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.267383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.267406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.277052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.277075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:41.635 [2024-07-15 21:35:32.286741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.286765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.296498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.296522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.306416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.306441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.316431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.316454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.326282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.326306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.336248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.336272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.346159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.346183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.355996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.356019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.365760] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.365783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.375431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.375455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.385209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.385233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.395265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.395289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.405087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.405112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.635 [2024-07-15 21:35:32.418646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.635 [2024-07-15 21:35:32.418677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.427692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.427717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.439973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.439997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.449871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.449896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.459768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.459798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.469779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.469803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.479656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.479688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.489743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.489768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.499835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.499860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.509756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.509780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.519527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.519551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.529643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 
[2024-07-15 21:35:32.529667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.539478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.539502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.549273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.549300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.559598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.559622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.569809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.569834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.579800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.579825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.590086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.590110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.894 [2024-07-15 21:35:32.599980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.894 [2024-07-15 21:35:32.600004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.895 [2024-07-15 21:35:32.609984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.895 [2024-07-15 21:35:32.610008] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.895 [2024-07-15 21:35:32.619850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.895 [2024-07-15 21:35:32.619874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.895 [2024-07-15 21:35:32.629700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.895 [2024-07-15 21:35:32.629724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.895 [2024-07-15 21:35:32.639931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.895 [2024-07-15 21:35:32.639955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.895 [2024-07-15 21:35:32.651654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.895 [2024-07-15 21:35:32.651684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.895 [2024-07-15 21:35:32.661082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.895 [2024-07-15 21:35:32.661106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.895 [2024-07-15 21:35:32.670886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.895 [2024-07-15 21:35:32.670926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.895 [2024-07-15 21:35:32.680965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.895 [2024-07-15 21:35:32.680988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.691202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.691227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:42.155 [2024-07-15 21:35:32.700843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.700867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.713370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.713394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.722800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.722824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.735149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.735173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.746228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.746251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.754568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.754591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.766571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.766595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.776048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.776071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.785806] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.785829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.795721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.795744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.805785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.805809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.816384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.816409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.828709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.828733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.837663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.837687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.849502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.849526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.859278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.859302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.869707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.869741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.879913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.879937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.890045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.890068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.899861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.899884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.909936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.909960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.919989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.920013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.929858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.929882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.155 [2024-07-15 21:35:32.939916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.155 [2024-07-15 21:35:32.939939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:32.949831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 
[2024-07-15 21:35:32.949855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:32.959899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:32.959923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:32.969928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:32.969952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:32.979768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:32.979792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:32.990173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:32.990209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.000221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.000245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.010226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.010250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.021323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.021349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.030066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.030090] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.042192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.042216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.054152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.054180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.063214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.063247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.074500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.074525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.086697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.086741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.096369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.096392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.106224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.106248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.116199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.116223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:13:42.415 [2024-07-15 21:35:33.126234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.126259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.136450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.136475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.146538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.146563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.156922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.156946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.167273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.167298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.179668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.179692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.189416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.189440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.415 [2024-07-15 21:35:33.199510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.415 [2024-07-15 21:35:33.199534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.209650] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.209674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.219632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.219656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.229449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.229473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.239040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.239064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.248914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.248937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.259055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.259093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.269177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.269201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.278867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.278891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.290554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.290578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.299757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.299781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.309626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.309650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.319760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.319790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.329882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.329906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.339561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.339584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.349687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.349711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.359436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.359460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.369477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 
[2024-07-15 21:35:33.369501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.379417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.379445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.389449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.389472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.399439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.399462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.409269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.409293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.419345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.419369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.429584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.674 [2024-07-15 21:35:33.429610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.674 [2024-07-15 21:35:33.440049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.675 [2024-07-15 21:35:33.440074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.675 [2024-07-15 21:35:33.450329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.675 [2024-07-15 21:35:33.450354] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.675 [2024-07-15 21:35:33.460665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.675 [2024-07-15 21:35:33.460690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.934 [2024-07-15 21:35:33.469968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.934 [2024-07-15 21:35:33.469993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.934 00:13:42.934 Latency(us) 00:13:42.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.934 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:42.934 Nvme1n1 : 5.01 12614.89 98.55 0.00 0.00 10133.82 4587.52 22913.33 00:13:42.934 =================================================================================================================== 00:13:42.934 Total : 12614.89 98.55 0.00 0.00 10133.82 4587.52 22913.33 00:13:42.934 [2024-07-15 21:35:33.477286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.934 [2024-07-15 21:35:33.477310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.934 [2024-07-15 21:35:33.485308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.934 [2024-07-15 21:35:33.485332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.934 [2024-07-15 21:35:33.493408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.934 [2024-07-15 21:35:33.493463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.934 [2024-07-15 21:35:33.501428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.934 [2024-07-15 21:35:33.501485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:13:42.934 [2024-07-15 21:35:33.509444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.934 [2024-07-15 21:35:33.509498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.934 [2024-07-15 21:35:33.517460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.934 [2024-07-15 21:35:33.517506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.934 [2024-07-15 21:35:33.525497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.934 [2024-07-15 21:35:33.525552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.533517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.533574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.541540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.541587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.549556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.549610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.557552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.557596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.565534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.565558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 
[2024-07-15 21:35:33.573563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.573591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.581589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.581620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.589655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.589711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.597673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.597725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.605675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.605715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.613669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.613697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.621690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.621716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.629708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.629734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.637783] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.637837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.645789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.645827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.653758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.653778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 [2024-07-15 21:35:33.661781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.935 [2024-07-15 21:35:33.661801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (336376) - No such process 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 336376 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.935 delay0 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.935 21:35:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:42.935 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.194 [2024-07-15 21:35:33.821328] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:49.769 Initializing NVMe Controllers 00:13:49.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:49.769 Initialization complete. Launching workers. 
00:13:49.769 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 105 00:13:49.769 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 392, failed to submit 33 00:13:49.769 success 216, unsuccess 176, failed 0 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.769 rmmod nvme_tcp 00:13:49.769 rmmod nvme_fabrics 00:13:49.769 rmmod nvme_keyring 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 335356 ']' 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 335356 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 335356 ']' 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 335356 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 335356 00:13:49.769 21:35:40 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 335356' 00:13:49.769 killing process with pid 335356 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 335356 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 335356 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.769 21:35:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.678 21:35:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.678 00:13:51.678 real 0m27.502s 00:13:51.678 user 0m40.741s 00:13:51.678 sys 0m7.520s 00:13:51.678 21:35:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.678 21:35:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:51.678 ************************************ 00:13:51.678 END TEST nvmf_zcopy 00:13:51.678 ************************************ 00:13:51.678 21:35:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:51.678 21:35:42 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:51.678 21:35:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:51.678 21:35:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.678 21:35:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.678 ************************************ 00:13:51.678 START TEST nvmf_nmic 00:13:51.678 ************************************ 00:13:51.678 21:35:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:51.678 * Looking for test storage... 00:13:51.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.937 
21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.937 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.938 21:35:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:53.844 21:35:44 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:53.844 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:53.844 Found 0000:08:00.1 (0x8086 - 0x159b) 
00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:53.844 Found net devices under 0000:08:00.0: cvl_0_0 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.844 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.845 21:35:44 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:53.845 Found net devices under 0000:08:00.1: cvl_0_1 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:13:53.845 00:13:53.845 --- 10.0.0.2 ping statistics --- 00:13:53.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.845 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:13:53.845 00:13:53.845 --- 10.0.0.1 ping statistics --- 00:13:53.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.845 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=338978 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 338978 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 338978 ']' 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.845 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:53.845 [2024-07-15 21:35:44.404934] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:13:53.845 [2024-07-15 21:35:44.405026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.845 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.845 [2024-07-15 21:35:44.470301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.845 [2024-07-15 21:35:44.588294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.845 [2024-07-15 21:35:44.588350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.845 [2024-07-15 21:35:44.588366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.845 [2024-07-15 21:35:44.588379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.845 [2024-07-15 21:35:44.588392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:53.845 [2024-07-15 21:35:44.588497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.845 [2024-07-15 21:35:44.588630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.845 [2024-07-15 21:35:44.589441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.845 [2024-07-15 21:35:44.589452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 [2024-07-15 21:35:44.735909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 Malloc0 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 [2024-07-15 21:35:44.786132] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:54.105 test case1: single bdev can't be used in multiple subsystems 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 [2024-07-15 21:35:44.810036] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:54.105 [2024-07-15 21:35:44.810068] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:54.105 [2024-07-15 21:35:44.810084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:54.105 request: 00:13:54.105 { 00:13:54.105 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:54.105 "namespace": { 00:13:54.105 "bdev_name": "Malloc0", 00:13:54.105 "no_auto_visible": false 00:13:54.105 }, 00:13:54.105 "method": "nvmf_subsystem_add_ns", 00:13:54.105 "req_id": 1 00:13:54.105 } 00:13:54.105 Got JSON-RPC error response 00:13:54.105 response: 00:13:54.105 { 00:13:54.105 "code": -32602, 00:13:54.105 "message": "Invalid parameters" 00:13:54.105 } 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:13:54.105 Adding namespace failed - expected result. 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:54.105 test case2: host connect to nvmf target in multiple paths 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:54.105 [2024-07-15 21:35:44.818153] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.105 21:35:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:54.673 21:35:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:54.931 21:35:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.931 21:35:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:54.931 21:35:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.931 21:35:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:54.931 21:35:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:57.466 21:35:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:57.466 21:35:47 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:57.466 21:35:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.466 21:35:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:57.466 21:35:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.466 21:35:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:57.466 21:35:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:57.466 [global] 00:13:57.466 thread=1 00:13:57.466 invalidate=1 00:13:57.466 rw=write 00:13:57.466 time_based=1 00:13:57.466 runtime=1 00:13:57.466 ioengine=libaio 00:13:57.466 direct=1 00:13:57.466 bs=4096 00:13:57.466 iodepth=1 00:13:57.466 norandommap=0 00:13:57.466 numjobs=1 00:13:57.466 00:13:57.466 verify_dump=1 00:13:57.466 verify_backlog=512 00:13:57.466 verify_state_save=0 00:13:57.466 do_verify=1 00:13:57.466 verify=crc32c-intel 00:13:57.466 [job0] 00:13:57.466 filename=/dev/nvme0n1 00:13:57.466 Could not set queue depth (nvme0n1) 00:13:57.466 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.466 fio-3.35 00:13:57.466 Starting 1 thread 00:13:58.397 00:13:58.398 job0: (groupid=0, jobs=1): err= 0: pid=339378: Mon Jul 15 21:35:49 2024 00:13:58.398 read: IOPS=116, BW=465KiB/s (476kB/s)(480KiB/1032msec) 00:13:58.398 slat (nsec): min=8342, max=42958, avg=18187.91, stdev=6628.86 00:13:58.398 clat (usec): min=195, max=42029, avg=7724.38, stdev=15859.85 00:13:58.398 lat (usec): min=209, max=42045, avg=7742.57, stdev=15863.39 00:13:58.398 clat percentiles (usec): 00:13:58.398 | 1.00th=[ 198], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 210], 00:13:58.398 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:13:58.398 | 
70.00th=[ 253], 80.00th=[ 461], 90.00th=[41157], 95.00th=[41157], 00:13:58.398 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:58.398 | 99.99th=[42206] 00:13:58.398 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:13:58.398 slat (nsec): min=10130, max=44833, avg=19959.27, stdev=4068.61 00:13:58.398 clat (usec): min=147, max=315, avg=173.67, stdev=21.84 00:13:58.398 lat (usec): min=162, max=360, avg=193.63, stdev=23.17 00:13:58.398 clat percentiles (usec): 00:13:58.398 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 155], 20.00th=[ 159], 00:13:58.398 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:13:58.398 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 212], 00:13:58.398 | 99.00th=[ 253], 99.50th=[ 289], 99.90th=[ 318], 99.95th=[ 318], 00:13:58.398 | 99.99th=[ 318] 00:13:58.398 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:58.398 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:58.398 lat (usec) : 250=92.41%, 500=4.11% 00:13:58.398 lat (msec) : 50=3.48% 00:13:58.398 cpu : usr=1.36%, sys=1.16%, ctx=632, majf=0, minf=1 00:13:58.398 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.398 issued rwts: total=120,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.398 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.398 00:13:58.398 Run status group 0 (all jobs): 00:13:58.398 READ: bw=465KiB/s (476kB/s), 465KiB/s-465KiB/s (476kB/s-476kB/s), io=480KiB (492kB), run=1032-1032msec 00:13:58.398 WRITE: bw=1984KiB/s (2032kB/s), 1984KiB/s-1984KiB/s (2032kB/s-2032kB/s), io=2048KiB (2097kB), run=1032-1032msec 00:13:58.398 00:13:58.398 Disk stats (read/write): 00:13:58.398 nvme0n1: ios=166/512, merge=0/0, ticks=786/86, 
in_queue=872, util=91.88% 00:13:58.398 21:35:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.656 rmmod nvme_tcp 00:13:58.656 rmmod nvme_fabrics 00:13:58.656 rmmod nvme_keyring 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@489 -- # '[' -n 338978 ']' 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 338978 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 338978 ']' 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 338978 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 338978 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 338978' 00:13:58.656 killing process with pid 338978 00:13:58.656 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 338978 00:13:58.657 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 338978 00:13:58.916 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.916 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.916 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.916 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.916 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.916 21:35:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.916 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.916 21:35:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.823 21:35:51 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.823 00:14:00.823 real 0m9.191s 00:14:00.823 user 0m20.500s 00:14:00.823 sys 0m2.098s 00:14:00.823 21:35:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.823 21:35:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:00.823 ************************************ 00:14:00.823 END TEST nvmf_nmic 00:14:00.823 ************************************ 00:14:01.081 21:35:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:01.081 21:35:51 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:01.081 21:35:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.081 21:35:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.081 21:35:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.081 ************************************ 00:14:01.081 START TEST nvmf_fio_target 00:14:01.081 ************************************ 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:01.081 * Looking for test storage... 
00:14:01.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.081 21:35:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.987 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.987 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.987 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.987 
21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.987 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.987 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.988 
21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:02.988 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:02.988 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:02.988 Found net devices under 0000:08:00.0: cvl_0_0 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:02.988 Found net devices under 0000:08:00.1: cvl_0_1 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.988 21:35:53 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:14:02.988 00:14:02.988 --- 10.0.0.2 ping statistics --- 00:14:02.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.988 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:14:02.988 00:14:02.988 --- 10.0.0.1 ping statistics --- 00:14:02.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.988 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=340979 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 340979 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- 
# '[' -z 340979 ']' 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.988 21:35:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.988 [2024-07-15 21:35:53.708641] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:14:02.988 [2024-07-15 21:35:53.708748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.988 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.988 [2024-07-15 21:35:53.774630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.246 [2024-07-15 21:35:53.895729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.246 [2024-07-15 21:35:53.895784] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.246 [2024-07-15 21:35:53.895800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.246 [2024-07-15 21:35:53.895813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.246 [2024-07-15 21:35:53.895825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:03.246 [2024-07-15 21:35:53.895918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.246 [2024-07-15 21:35:53.895971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.246 [2024-07-15 21:35:53.896020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.246 [2024-07-15 21:35:53.896023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.246 21:35:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.246 21:35:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:14:03.246 21:35:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.246 21:35:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.246 21:35:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.246 21:35:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.246 21:35:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:03.813 [2024-07-15 21:35:54.314800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.813 21:35:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:04.071 21:35:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:04.071 21:35:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:04.330 21:35:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:04.330 21:35:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:14:04.588 21:35:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:04.588 21:35:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:04.845 21:35:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:04.845 21:35:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:05.103 21:35:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:05.670 21:35:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:05.670 21:35:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:05.928 21:35:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:05.928 21:35:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.185 21:35:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:06.186 21:35:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:06.443 21:35:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:06.701 21:35:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:06.701 21:35:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:06.958 21:35:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:06.958 21:35:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.216 21:35:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.473 [2024-07-15 21:35:58.149315] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.473 21:35:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:07.730 21:35:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:07.988 21:35:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.553 21:35:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:08.553 21:35:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:08.553 21:35:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.553 21:35:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:08.553 21:35:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:08.553 21:35:59 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:10.452 21:36:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:10.452 21:36:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:10.453 21:36:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.453 21:36:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:10.453 21:36:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.453 21:36:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:10.453 21:36:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:10.453 [global] 00:14:10.453 thread=1 00:14:10.453 invalidate=1 00:14:10.453 rw=write 00:14:10.453 time_based=1 00:14:10.453 runtime=1 00:14:10.453 ioengine=libaio 00:14:10.453 direct=1 00:14:10.453 bs=4096 00:14:10.453 iodepth=1 00:14:10.453 norandommap=0 00:14:10.453 numjobs=1 00:14:10.453 00:14:10.453 verify_dump=1 00:14:10.453 verify_backlog=512 00:14:10.453 verify_state_save=0 00:14:10.453 do_verify=1 00:14:10.453 verify=crc32c-intel 00:14:10.453 [job0] 00:14:10.453 filename=/dev/nvme0n1 00:14:10.453 [job1] 00:14:10.453 filename=/dev/nvme0n2 00:14:10.453 [job2] 00:14:10.453 filename=/dev/nvme0n3 00:14:10.453 [job3] 00:14:10.453 filename=/dev/nvme0n4 00:14:10.453 Could not set queue depth (nvme0n1) 00:14:10.453 Could not set queue depth (nvme0n2) 00:14:10.453 Could not set queue depth (nvme0n3) 00:14:10.453 Could not set queue depth (nvme0n4) 00:14:10.710 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:10.710 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:14:10.710 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:10.710 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:10.710 fio-3.35 00:14:10.710 Starting 4 threads 00:14:12.086 00:14:12.086 job0: (groupid=0, jobs=1): err= 0: pid=341879: Mon Jul 15 21:36:02 2024 00:14:12.086 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:12.086 slat (nsec): min=6282, max=44957, avg=15259.70, stdev=4112.15 00:14:12.086 clat (usec): min=187, max=41989, avg=1578.05, stdev=7140.08 00:14:12.086 lat (usec): min=195, max=42010, avg=1593.31, stdev=7140.20 00:14:12.086 clat percentiles (usec): 00:14:12.086 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:14:12.086 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 237], 00:14:12.086 | 70.00th=[ 255], 80.00th=[ 355], 90.00th=[ 429], 95.00th=[ 486], 00:14:12.086 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:14:12.086 | 99.99th=[42206] 00:14:12.086 write: IOPS=848, BW=3393KiB/s (3474kB/s)(3396KiB/1001msec); 0 zone resets 00:14:12.086 slat (nsec): min=7578, max=44856, avg=16358.73, stdev=6567.45 00:14:12.086 clat (usec): min=133, max=393, avg=193.24, stdev=41.27 00:14:12.086 lat (usec): min=140, max=401, avg=209.60, stdev=41.03 00:14:12.086 clat percentiles (usec): 00:14:12.086 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:14:12.086 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 182], 60.00th=[ 196], 00:14:12.086 | 70.00th=[ 212], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 265], 00:14:12.086 | 99.00th=[ 314], 99.50th=[ 343], 99.90th=[ 392], 99.95th=[ 392], 00:14:12.086 | 99.99th=[ 392] 00:14:12.086 bw ( KiB/s): min= 4096, max= 4096, per=26.95%, avg=4096.00, stdev= 0.00, samples=1 00:14:12.086 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:12.087 lat (usec) : 250=82.81%, 500=15.50%, 750=0.37% 00:14:12.087 
lat (msec) : 2=0.07%, 20=0.07%, 50=1.18% 00:14:12.087 cpu : usr=1.90%, sys=2.80%, ctx=1361, majf=0, minf=1 00:14:12.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.087 issued rwts: total=512,849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.087 job1: (groupid=0, jobs=1): err= 0: pid=341886: Mon Jul 15 21:36:02 2024 00:14:12.087 read: IOPS=829, BW=3318KiB/s (3397kB/s)(3404KiB/1026msec) 00:14:12.087 slat (nsec): min=6330, max=55525, avg=10715.10, stdev=5583.12 00:14:12.087 clat (usec): min=166, max=41982, avg=944.76, stdev=5220.75 00:14:12.087 lat (usec): min=173, max=42012, avg=955.48, stdev=5221.94 00:14:12.087 clat percentiles (usec): 00:14:12.087 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:14:12.087 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:14:12.087 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 416], 95.00th=[ 490], 00:14:12.087 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:12.087 | 99.99th=[42206] 00:14:12.087 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:14:12.087 slat (usec): min=8, max=516, avg=12.55, stdev=22.20 00:14:12.087 clat (usec): min=121, max=500, avg=188.70, stdev=53.59 00:14:12.087 lat (usec): min=129, max=823, avg=201.25, stdev=61.61 00:14:12.087 clat percentiles (usec): 00:14:12.087 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:14:12.087 | 30.00th=[ 143], 40.00th=[ 159], 50.00th=[ 180], 60.00th=[ 198], 00:14:12.087 | 70.00th=[ 227], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 269], 00:14:12.087 | 99.00th=[ 351], 99.50th=[ 375], 99.90th=[ 441], 99.95th=[ 502], 00:14:12.087 | 99.99th=[ 502] 00:14:12.087 bw ( KiB/s): min= 4096, max= 4096, 
per=26.95%, avg=4096.00, stdev= 0.00, samples=2 00:14:12.087 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:14:12.087 lat (usec) : 250=75.04%, 500=23.04%, 750=1.12% 00:14:12.087 lat (msec) : 4=0.05%, 50=0.75% 00:14:12.087 cpu : usr=1.37%, sys=3.02%, ctx=1884, majf=0, minf=1 00:14:12.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.087 issued rwts: total=851,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.087 job2: (groupid=0, jobs=1): err= 0: pid=341887: Mon Jul 15 21:36:02 2024 00:14:12.087 read: IOPS=508, BW=2035KiB/s (2084kB/s)(2100KiB/1032msec) 00:14:12.087 slat (nsec): min=5598, max=39725, avg=9859.00, stdev=5686.15 00:14:12.087 clat (usec): min=161, max=41098, avg=1552.92, stdev=7180.45 00:14:12.087 lat (usec): min=167, max=41107, avg=1562.78, stdev=7182.33 00:14:12.087 clat percentiles (usec): 00:14:12.087 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:14:12.087 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 221], 00:14:12.087 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 302], 95.00th=[ 490], 00:14:12.087 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:12.087 | 99.99th=[41157] 00:14:12.087 write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets 00:14:12.087 slat (nsec): min=7088, max=43058, avg=11162.55, stdev=6450.45 00:14:12.087 clat (usec): min=124, max=814, avg=190.53, stdev=55.78 00:14:12.087 lat (usec): min=132, max=824, avg=201.69, stdev=58.93 00:14:12.087 clat percentiles (usec): 00:14:12.087 | 1.00th=[ 129], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:14:12.087 | 30.00th=[ 143], 40.00th=[ 165], 50.00th=[ 180], 60.00th=[ 204], 00:14:12.087 | 70.00th=[ 229], 80.00th=[ 
243], 90.00th=[ 255], 95.00th=[ 265], 00:14:12.087 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 449], 99.95th=[ 816], 00:14:12.087 | 99.99th=[ 816] 00:14:12.087 bw ( KiB/s): min= 1856, max= 6336, per=26.95%, avg=4096.00, stdev=3167.84, samples=2 00:14:12.087 iops : min= 464, max= 1584, avg=1024.00, stdev=791.96, samples=2 00:14:12.087 lat (usec) : 250=82.12%, 500=16.27%, 750=0.32%, 1000=0.13% 00:14:12.087 lat (msec) : 10=0.06%, 50=1.10% 00:14:12.087 cpu : usr=0.87%, sys=2.04%, ctx=1549, majf=0, minf=1 00:14:12.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.087 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.087 job3: (groupid=0, jobs=1): err= 0: pid=341888: Mon Jul 15 21:36:02 2024 00:14:12.087 read: IOPS=508, BW=2034KiB/s (2082kB/s)(2056KiB/1011msec) 00:14:12.087 slat (nsec): min=6457, max=33756, avg=9298.60, stdev=4231.71 00:14:12.087 clat (usec): min=163, max=41139, avg=1375.71, stdev=6607.11 00:14:12.087 lat (usec): min=170, max=41146, avg=1385.00, stdev=6608.29 00:14:12.087 clat percentiles (usec): 00:14:12.087 | 1.00th=[ 194], 5.00th=[ 219], 10.00th=[ 235], 20.00th=[ 243], 00:14:12.087 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:14:12.087 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 388], 95.00th=[ 498], 00:14:12.087 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:12.087 | 99.99th=[41157] 00:14:12.087 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:14:12.087 slat (usec): min=8, max=34662, avg=71.70, stdev=1234.20 00:14:12.087 clat (usec): min=138, max=2086, avg=213.77, stdev=74.61 00:14:12.087 lat (usec): min=150, max=34903, avg=285.47, stdev=1238.69 00:14:12.087 clat percentiles 
(usec): 00:14:12.087 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180], 00:14:12.087 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 204], 60.00th=[ 215], 00:14:12.087 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 273], 00:14:12.087 | 99.00th=[ 351], 99.50th=[ 420], 99.90th=[ 865], 99.95th=[ 2089], 00:14:12.087 | 99.99th=[ 2089] 00:14:12.087 bw ( KiB/s): min= 4096, max= 4096, per=26.95%, avg=4096.00, stdev= 0.00, samples=2 00:14:12.087 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:14:12.087 lat (usec) : 250=77.11%, 500=21.13%, 750=0.52%, 1000=0.20% 00:14:12.087 lat (msec) : 2=0.07%, 4=0.07%, 50=0.91% 00:14:12.087 cpu : usr=1.88%, sys=3.07%, ctx=1541, majf=0, minf=1 00:14:12.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.087 issued rwts: total=514,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.087 00:14:12.087 Run status group 0 (all jobs): 00:14:12.087 READ: bw=9310KiB/s (9534kB/s), 2034KiB/s-3318KiB/s (2082kB/s-3397kB/s), io=9608KiB (9839kB), run=1001-1032msec 00:14:12.087 WRITE: bw=14.8MiB/s (15.6MB/s), 3393KiB/s-4051KiB/s (3474kB/s-4149kB/s), io=15.3MiB (16.1MB), run=1001-1032msec 00:14:12.087 00:14:12.087 Disk stats (read/write): 00:14:12.087 nvme0n1: ios=151/512, merge=0/0, ticks=725/110, in_queue=835, util=84.17% 00:14:12.087 nvme0n2: ios=837/1024, merge=0/0, ticks=702/192, in_queue=894, util=91.70% 00:14:12.087 nvme0n3: ios=577/1024, merge=0/0, ticks=686/195, in_queue=881, util=92.31% 00:14:12.087 nvme0n4: ios=562/586, merge=0/0, ticks=820/120, in_queue=940, util=94.71% 00:14:12.087 21:36:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t 
randwrite -r 1 -v 00:14:12.087 [global] 00:14:12.087 thread=1 00:14:12.087 invalidate=1 00:14:12.087 rw=randwrite 00:14:12.087 time_based=1 00:14:12.087 runtime=1 00:14:12.087 ioengine=libaio 00:14:12.087 direct=1 00:14:12.087 bs=4096 00:14:12.087 iodepth=1 00:14:12.087 norandommap=0 00:14:12.087 numjobs=1 00:14:12.087 00:14:12.087 verify_dump=1 00:14:12.087 verify_backlog=512 00:14:12.087 verify_state_save=0 00:14:12.087 do_verify=1 00:14:12.087 verify=crc32c-intel 00:14:12.087 [job0] 00:14:12.087 filename=/dev/nvme0n1 00:14:12.087 [job1] 00:14:12.087 filename=/dev/nvme0n2 00:14:12.087 [job2] 00:14:12.087 filename=/dev/nvme0n3 00:14:12.087 [job3] 00:14:12.087 filename=/dev/nvme0n4 00:14:12.087 Could not set queue depth (nvme0n1) 00:14:12.087 Could not set queue depth (nvme0n2) 00:14:12.087 Could not set queue depth (nvme0n3) 00:14:12.087 Could not set queue depth (nvme0n4) 00:14:12.345 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.345 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.345 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.345 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.345 fio-3.35 00:14:12.345 Starting 4 threads 00:14:13.717 00:14:13.717 job0: (groupid=0, jobs=1): err= 0: pid=342105: Mon Jul 15 21:36:04 2024 00:14:13.717 read: IOPS=299, BW=1199KiB/s (1228kB/s)(1204KiB/1004msec) 00:14:13.717 slat (nsec): min=7592, max=32830, avg=18611.73, stdev=6402.23 00:14:13.717 clat (usec): min=197, max=42093, avg=2911.93, stdev=9938.20 00:14:13.717 lat (usec): min=214, max=42109, avg=2930.54, stdev=9937.42 00:14:13.717 clat percentiles (usec): 00:14:13.717 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:14:13.717 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 265], 
60.00th=[ 289], 00:14:13.717 | 70.00th=[ 306], 80.00th=[ 375], 90.00th=[ 490], 95.00th=[40633], 00:14:13.717 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:14:13.717 | 99.99th=[42206] 00:14:13.717 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:14:13.717 slat (nsec): min=6634, max=72786, avg=17009.59, stdev=7957.32 00:14:13.717 clat (usec): min=132, max=463, avg=211.87, stdev=45.44 00:14:13.717 lat (usec): min=148, max=480, avg=228.88, stdev=45.03 00:14:13.717 clat percentiles (usec): 00:14:13.717 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 180], 00:14:13.717 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 208], 00:14:13.717 | 70.00th=[ 221], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 289], 00:14:13.717 | 99.00th=[ 338], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 465], 00:14:13.717 | 99.99th=[ 465] 00:14:13.717 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.717 lat (usec) : 250=64.33%, 500=32.35%, 750=0.86% 00:14:13.717 lat (msec) : 20=0.12%, 50=2.34% 00:14:13.717 cpu : usr=0.90%, sys=1.89%, ctx=813, majf=0, minf=1 00:14:13.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.718 issued rwts: total=301,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.718 job1: (groupid=0, jobs=1): err= 0: pid=342108: Mon Jul 15 21:36:04 2024 00:14:13.718 read: IOPS=462, BW=1849KiB/s (1893kB/s)(1908KiB/1032msec) 00:14:13.718 slat (nsec): min=5563, max=52755, avg=13914.67, stdev=6973.89 00:14:13.718 clat (usec): min=170, max=41614, avg=1943.67, stdev=8131.43 00:14:13.718 lat (usec): min=181, max=41626, avg=1957.59, 
stdev=8132.91 00:14:13.718 clat percentiles (usec): 00:14:13.718 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 208], 00:14:13.718 | 30.00th=[ 217], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 243], 00:14:13.718 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 302], 95.00th=[ 465], 00:14:13.718 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:13.718 | 99.99th=[41681] 00:14:13.718 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:14:13.718 slat (nsec): min=6633, max=44556, avg=12534.73, stdev=5069.50 00:14:13.718 clat (usec): min=130, max=261, avg=170.40, stdev=24.25 00:14:13.718 lat (usec): min=137, max=297, avg=182.93, stdev=25.57 00:14:13.718 clat percentiles (usec): 00:14:13.718 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 00:14:13.718 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 172], 00:14:13.718 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 231], 00:14:13.718 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 262], 99.95th=[ 262], 00:14:13.718 | 99.99th=[ 262] 00:14:13.718 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.718 lat (usec) : 250=87.97%, 500=9.91% 00:14:13.718 lat (msec) : 4=0.10%, 50=2.02% 00:14:13.718 cpu : usr=0.78%, sys=1.36%, ctx=991, majf=0, minf=1 00:14:13.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.718 issued rwts: total=477,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.718 job2: (groupid=0, jobs=1): err= 0: pid=342114: Mon Jul 15 21:36:04 2024 00:14:13.718 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:14:13.718 slat (nsec): 
min=14660, max=35964, avg=25321.00, stdev=8749.63 00:14:13.718 clat (usec): min=40795, max=42019, avg=41767.63, stdev=407.25 00:14:13.718 lat (usec): min=40812, max=42034, avg=41792.95, stdev=405.39 00:14:13.718 clat percentiles (usec): 00:14:13.718 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:14:13.718 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:13.718 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:13.718 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:13.718 | 99.99th=[42206] 00:14:13.718 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:14:13.718 slat (nsec): min=7983, max=40161, avg=17192.48, stdev=6588.04 00:14:13.718 clat (usec): min=147, max=418, avg=219.46, stdev=39.16 00:14:13.718 lat (usec): min=156, max=438, avg=236.65, stdev=38.01 00:14:13.718 clat percentiles (usec): 00:14:13.718 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 188], 00:14:13.718 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:14:13.718 | 70.00th=[ 227], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 289], 00:14:13.718 | 99.00th=[ 322], 99.50th=[ 375], 99.90th=[ 420], 99.95th=[ 420], 00:14:13.718 | 99.99th=[ 420] 00:14:13.718 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.718 lat (usec) : 250=75.80%, 500=20.26% 00:14:13.718 lat (msec) : 50=3.94% 00:14:13.718 cpu : usr=0.80%, sys=1.00%, ctx=535, majf=0, minf=1 00:14:13.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.718 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.718 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:14:13.718 job3: (groupid=0, jobs=1): err= 0: pid=342116: Mon Jul 15 21:36:04 2024 00:14:13.718 read: IOPS=163, BW=654KiB/s (670kB/s)(676KiB/1033msec) 00:14:13.718 slat (nsec): min=7938, max=50269, avg=21296.59, stdev=8861.08 00:14:13.718 clat (usec): min=204, max=41545, avg=5359.79, stdev=13393.14 00:14:13.718 lat (usec): min=213, max=41566, avg=5381.09, stdev=13392.91 00:14:13.718 clat percentiles (usec): 00:14:13.718 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 239], 00:14:13.718 | 30.00th=[ 255], 40.00th=[ 281], 50.00th=[ 310], 60.00th=[ 347], 00:14:13.718 | 70.00th=[ 371], 80.00th=[ 465], 90.00th=[40633], 95.00th=[40633], 00:14:13.718 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:13.718 | 99.99th=[41681] 00:14:13.718 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:14:13.718 slat (nsec): min=7511, max=50212, avg=16794.65, stdev=5984.85 00:14:13.718 clat (usec): min=152, max=400, avg=217.25, stdev=39.03 00:14:13.718 lat (usec): min=162, max=417, avg=234.04, stdev=38.78 00:14:13.718 clat percentiles (usec): 00:14:13.718 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 184], 00:14:13.718 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:14:13.718 | 70.00th=[ 231], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 285], 00:14:13.718 | 99.00th=[ 302], 99.50th=[ 343], 99.90th=[ 400], 99.95th=[ 400], 00:14:13.718 | 99.99th=[ 400] 00:14:13.718 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.718 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.718 lat (usec) : 250=64.90%, 500=31.86% 00:14:13.718 lat (msec) : 4=0.15%, 50=3.08% 00:14:13.718 cpu : usr=0.48%, sys=1.45%, ctx=683, majf=0, minf=1 00:14:13.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.718 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.718 issued rwts: total=169,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.718 00:14:13.718 Run status group 0 (all jobs): 00:14:13.718 READ: bw=3748KiB/s (3838kB/s), 83.6KiB/s-1849KiB/s (85.6kB/s-1893kB/s), io=3872KiB (3965kB), run=1004-1033msec 00:14:13.718 WRITE: bw=7930KiB/s (8121kB/s), 1983KiB/s-2040KiB/s (2030kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1033msec 00:14:13.718 00:14:13.718 Disk stats (read/write): 00:14:13.718 nvme0n1: ios=288/512, merge=0/0, ticks=737/103, in_queue=840, util=86.57% 00:14:13.718 nvme0n2: ios=462/512, merge=0/0, ticks=1252/82, in_queue=1334, util=94.01% 00:14:13.718 nvme0n3: ios=74/512, merge=0/0, ticks=1197/113, in_queue=1310, util=97.50% 00:14:13.718 nvme0n4: ios=194/512, merge=0/0, ticks=864/106, in_queue=970, util=94.53% 00:14:13.718 21:36:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:13.718 [global] 00:14:13.718 thread=1 00:14:13.718 invalidate=1 00:14:13.718 rw=write 00:14:13.718 time_based=1 00:14:13.718 runtime=1 00:14:13.718 ioengine=libaio 00:14:13.718 direct=1 00:14:13.718 bs=4096 00:14:13.718 iodepth=128 00:14:13.718 norandommap=0 00:14:13.718 numjobs=1 00:14:13.718 00:14:13.718 verify_dump=1 00:14:13.718 verify_backlog=512 00:14:13.718 verify_state_save=0 00:14:13.718 do_verify=1 00:14:13.718 verify=crc32c-intel 00:14:13.718 [job0] 00:14:13.718 filename=/dev/nvme0n1 00:14:13.718 [job1] 00:14:13.718 filename=/dev/nvme0n2 00:14:13.718 [job2] 00:14:13.718 filename=/dev/nvme0n3 00:14:13.718 [job3] 00:14:13.718 filename=/dev/nvme0n4 00:14:13.718 Could not set queue depth (nvme0n1) 00:14:13.718 Could not set queue depth (nvme0n2) 00:14:13.718 Could not set queue depth (nvme0n3) 00:14:13.718 Could not set queue depth (nvme0n4) 00:14:13.718 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.718 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.718 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.718 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:13.718 fio-3.35 00:14:13.718 Starting 4 threads 00:14:15.092 00:14:15.092 job0: (groupid=0, jobs=1): err= 0: pid=342380: Mon Jul 15 21:36:05 2024 00:14:15.092 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:14:15.092 slat (usec): min=3, max=7879, avg=77.16, stdev=372.27 00:14:15.092 clat (usec): min=7031, max=17933, avg=10349.52, stdev=1373.80 00:14:15.092 lat (usec): min=7050, max=17943, avg=10426.68, stdev=1376.01 00:14:15.092 clat percentiles (usec): 00:14:15.092 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:14:15.092 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:14:15.092 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:14:15.092 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:14:15.092 | 99.99th=[17957] 00:14:15.092 write: IOPS=6009, BW=23.5MiB/s (24.6MB/s)(23.5MiB/1002msec); 0 zone resets 00:14:15.092 slat (usec): min=5, max=40977, avg=83.82, stdev=722.94 00:14:15.092 clat (usec): min=447, max=49119, avg=9924.62, stdev=1314.16 00:14:15.092 lat (usec): min=3144, max=76538, avg=10008.44, stdev=1566.00 00:14:15.092 clat percentiles (usec): 00:14:15.092 | 1.00th=[ 6783], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[ 9372], 00:14:15.092 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:14:15.092 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:14:15.092 | 99.00th=[12780], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 00:14:15.092 | 99.99th=[49021] 00:14:15.092 bw ( KiB/s): 
min=22928, max=24224, per=33.23%, avg=23576.00, stdev=916.41, samples=2 00:14:15.092 iops : min= 5732, max= 6056, avg=5894.00, stdev=229.10, samples=2 00:14:15.092 lat (usec) : 500=0.01% 00:14:15.092 lat (msec) : 4=0.32%, 10=42.06%, 20=57.60%, 50=0.01% 00:14:15.092 cpu : usr=6.49%, sys=13.49%, ctx=544, majf=0, minf=11 00:14:15.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:15.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.092 issued rwts: total=5632,6022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.092 job1: (groupid=0, jobs=1): err= 0: pid=342381: Mon Jul 15 21:36:05 2024 00:14:15.092 read: IOPS=3060, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:14:15.092 slat (usec): min=2, max=8599, avg=126.80, stdev=708.73 00:14:15.092 clat (usec): min=2972, max=47368, avg=17462.40, stdev=6854.02 00:14:15.092 lat (usec): min=6186, max=47382, avg=17589.20, stdev=6912.11 00:14:15.092 clat percentiles (usec): 00:14:15.092 | 1.00th=[ 6652], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[10683], 00:14:15.092 | 30.00th=[15139], 40.00th=[15795], 50.00th=[16319], 60.00th=[18220], 00:14:15.092 | 70.00th=[18744], 80.00th=[22152], 90.00th=[26608], 95.00th=[30278], 00:14:15.092 | 99.00th=[38536], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:14:15.092 | 99.99th=[47449] 00:14:15.092 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:14:15.092 slat (usec): min=4, max=8455, avg=161.77, stdev=738.92 00:14:15.092 clat (usec): min=4735, max=61371, avg=20575.36, stdev=12074.13 00:14:15.092 lat (usec): min=4753, max=61406, avg=20737.13, stdev=12152.81 00:14:15.092 clat percentiles (usec): 00:14:15.092 | 1.00th=[ 7570], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[13173], 00:14:15.092 | 30.00th=[13698], 40.00th=[14484], 50.00th=[16319], 
60.00th=[20055], 00:14:15.092 | 70.00th=[21365], 80.00th=[26084], 90.00th=[39060], 95.00th=[50070], 00:14:15.092 | 99.00th=[58983], 99.50th=[60031], 99.90th=[61604], 99.95th=[61604], 00:14:15.092 | 99.99th=[61604] 00:14:15.092 bw ( KiB/s): min=11344, max=16384, per=19.54%, avg=13864.00, stdev=3563.82, samples=2 00:14:15.092 iops : min= 2836, max= 4096, avg=3466.00, stdev=890.95, samples=2 00:14:15.092 lat (msec) : 4=0.02%, 10=15.42%, 20=50.41%, 50=31.52%, 100=2.64% 00:14:15.092 cpu : usr=4.08%, sys=6.66%, ctx=345, majf=0, minf=15 00:14:15.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:15.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.092 issued rwts: total=3082,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.092 job2: (groupid=0, jobs=1): err= 0: pid=342382: Mon Jul 15 21:36:05 2024 00:14:15.092 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:14:15.092 slat (usec): min=2, max=10584, avg=95.06, stdev=601.25 00:14:15.092 clat (usec): min=3219, max=34454, avg=13091.93, stdev=4608.20 00:14:15.092 lat (usec): min=3223, max=34459, avg=13186.99, stdev=4636.53 00:14:15.092 clat percentiles (usec): 00:14:15.092 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10945], 00:14:15.092 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11994], 00:14:15.092 | 70.00th=[12780], 80.00th=[14222], 90.00th=[21627], 95.00th=[23725], 00:14:15.092 | 99.00th=[30802], 99.50th=[32375], 99.90th=[34341], 99.95th=[34341], 00:14:15.092 | 99.99th=[34341] 00:14:15.092 write: IOPS=4659, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1002msec); 0 zone resets 00:14:15.092 slat (usec): min=3, max=50909, avg=104.61, stdev=1099.05 00:14:15.092 clat (usec): min=543, max=104833, avg=11664.24, stdev=3435.23 00:14:15.092 lat (usec): min=786, max=104868, 
avg=11768.85, stdev=3719.55 00:14:15.092 clat percentiles (msec): 00:14:15.092 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 11], 00:14:15.092 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:14:15.092 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 16], 95.00th=[ 17], 00:14:15.092 | 99.00th=[ 19], 99.50th=[ 21], 99.90th=[ 23], 99.95th=[ 62], 00:14:15.092 | 99.99th=[ 105] 00:14:15.092 bw ( KiB/s): min=16384, max=20480, per=25.98%, avg=18432.00, stdev=2896.31, samples=2 00:14:15.092 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:14:15.092 lat (usec) : 750=0.01%, 1000=0.03% 00:14:15.092 lat (msec) : 2=0.06%, 4=0.67%, 10=14.39%, 20=79.36%, 50=5.44% 00:14:15.092 lat (msec) : 100=0.01%, 250=0.02% 00:14:15.092 cpu : usr=4.20%, sys=8.59%, ctx=435, majf=0, minf=15 00:14:15.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:15.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.092 issued rwts: total=4608,4669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.092 job3: (groupid=0, jobs=1): err= 0: pid=342383: Mon Jul 15 21:36:05 2024 00:14:15.092 read: IOPS=3301, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1007msec) 00:14:15.092 slat (usec): min=3, max=11789, avg=120.90, stdev=704.17 00:14:15.092 clat (usec): min=1393, max=39272, avg=14557.11, stdev=4219.37 00:14:15.092 lat (usec): min=6209, max=39283, avg=14678.01, stdev=4276.30 00:14:15.092 clat percentiles (usec): 00:14:15.092 | 1.00th=[ 6194], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[12125], 00:14:15.092 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13829], 60.00th=[14353], 00:14:15.092 | 70.00th=[14877], 80.00th=[16909], 90.00th=[19792], 95.00th=[22152], 00:14:15.092 | 99.00th=[31851], 99.50th=[33817], 99.90th=[39060], 99.95th=[39060], 00:14:15.092 | 99.99th=[39060] 00:14:15.092 
write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:14:15.092 slat (usec): min=4, max=8056, avg=159.56, stdev=647.58 00:14:15.092 clat (usec): min=973, max=56202, avg=22126.85, stdev=10248.56 00:14:15.092 lat (usec): min=982, max=56222, avg=22286.41, stdev=10315.19 00:14:15.092 clat percentiles (usec): 00:14:15.092 | 1.00th=[ 7242], 5.00th=[10159], 10.00th=[10421], 20.00th=[12911], 00:14:15.092 | 30.00th=[13698], 40.00th=[17433], 50.00th=[21103], 60.00th=[23725], 00:14:15.092 | 70.00th=[26346], 80.00th=[30278], 90.00th=[36439], 95.00th=[42206], 00:14:15.092 | 99.00th=[48497], 99.50th=[52167], 99.90th=[56361], 99.95th=[56361], 00:14:15.092 | 99.99th=[56361] 00:14:15.092 bw ( KiB/s): min=13392, max=15280, per=20.21%, avg=14336.00, stdev=1335.02, samples=2 00:14:15.092 iops : min= 3348, max= 3820, avg=3584.00, stdev=333.75, samples=2 00:14:15.092 lat (usec) : 1000=0.14% 00:14:15.092 lat (msec) : 2=0.03%, 10=6.57%, 20=60.34%, 50=32.58%, 100=0.33% 00:14:15.092 cpu : usr=3.38%, sys=4.37%, ctx=419, majf=0, minf=9 00:14:15.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:15.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.093 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.093 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.093 00:14:15.093 Run status group 0 (all jobs): 00:14:15.093 READ: bw=64.6MiB/s (67.7MB/s), 12.0MiB/s-22.0MiB/s (12.5MB/s-23.0MB/s), io=65.0MiB (68.2MB), run=1002-1007msec 00:14:15.093 WRITE: bw=69.3MiB/s (72.6MB/s), 13.9MiB/s-23.5MiB/s (14.6MB/s-24.6MB/s), io=69.8MiB (73.1MB), run=1002-1007msec 00:14:15.093 00:14:15.093 Disk stats (read/write): 00:14:15.093 nvme0n1: ios=4785/5120, merge=0/0, ticks=15742/15313, in_queue=31055, util=87.58% 00:14:15.093 nvme0n2: ios=2988/3072, merge=0/0, ticks=15986/20018, in_queue=36004, 
util=91.59% 00:14:15.093 nvme0n3: ios=3638/4096, merge=0/0, ticks=25581/27428, in_queue=53009, util=94.71% 00:14:15.093 nvme0n4: ios=2617/3031, merge=0/0, ticks=19074/35490, in_queue=54564, util=94.36% 00:14:15.093 21:36:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:15.093 [global] 00:14:15.093 thread=1 00:14:15.093 invalidate=1 00:14:15.093 rw=randwrite 00:14:15.093 time_based=1 00:14:15.093 runtime=1 00:14:15.093 ioengine=libaio 00:14:15.093 direct=1 00:14:15.093 bs=4096 00:14:15.093 iodepth=128 00:14:15.093 norandommap=0 00:14:15.093 numjobs=1 00:14:15.093 00:14:15.093 verify_dump=1 00:14:15.093 verify_backlog=512 00:14:15.093 verify_state_save=0 00:14:15.093 do_verify=1 00:14:15.093 verify=crc32c-intel 00:14:15.093 [job0] 00:14:15.093 filename=/dev/nvme0n1 00:14:15.093 [job1] 00:14:15.093 filename=/dev/nvme0n2 00:14:15.093 [job2] 00:14:15.093 filename=/dev/nvme0n3 00:14:15.093 [job3] 00:14:15.093 filename=/dev/nvme0n4 00:14:15.093 Could not set queue depth (nvme0n1) 00:14:15.093 Could not set queue depth (nvme0n2) 00:14:15.093 Could not set queue depth (nvme0n3) 00:14:15.093 Could not set queue depth (nvme0n4) 00:14:15.093 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.093 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.093 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.093 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.093 fio-3.35 00:14:15.093 Starting 4 threads 00:14:16.465 00:14:16.465 job0: (groupid=0, jobs=1): err= 0: pid=342562: Mon Jul 15 21:36:06 2024 00:14:16.466 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:14:16.466 slat 
(usec): min=2, max=16009, avg=110.10, stdev=756.22 00:14:16.466 clat (usec): min=3272, max=90632, avg=13151.80, stdev=9765.95 00:14:16.466 lat (usec): min=3278, max=90648, avg=13261.90, stdev=9843.79 00:14:16.466 clat percentiles (usec): 00:14:16.466 | 1.00th=[ 4686], 5.00th=[ 6652], 10.00th=[ 8225], 20.00th=[ 9503], 00:14:16.466 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11600], 00:14:16.466 | 70.00th=[12518], 80.00th=[14615], 90.00th=[17695], 95.00th=[21890], 00:14:16.466 | 99.00th=[77071], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:14:16.466 | 99.99th=[90702] 00:14:16.466 write: IOPS=4599, BW=18.0MiB/s (18.8MB/s)(18.1MiB/1007msec); 0 zone resets 00:14:16.466 slat (usec): min=3, max=11446, avg=96.62, stdev=614.08 00:14:16.466 clat (usec): min=3106, max=95598, avg=14421.48, stdev=13681.83 00:14:16.466 lat (usec): min=3121, max=95609, avg=14518.10, stdev=13746.10 00:14:16.466 clat percentiles (usec): 00:14:16.466 | 1.00th=[ 4293], 5.00th=[ 5735], 10.00th=[ 7111], 20.00th=[ 9372], 00:14:16.466 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10552], 60.00th=[10945], 00:14:16.466 | 70.00th=[12125], 80.00th=[13960], 90.00th=[20579], 95.00th=[42206], 00:14:16.466 | 99.00th=[86508], 99.50th=[91751], 99.90th=[95945], 99.95th=[95945], 00:14:16.466 | 99.99th=[95945] 00:14:16.466 bw ( KiB/s): min=16152, max=20712, per=27.96%, avg=18432.00, stdev=3224.41, samples=2 00:14:16.466 iops : min= 4038, max= 5178, avg=4608.00, stdev=806.10, samples=2 00:14:16.466 lat (msec) : 4=0.81%, 10=28.16%, 20=61.98%, 50=6.14%, 100=2.91% 00:14:16.466 cpu : usr=6.16%, sys=7.26%, ctx=456, majf=0, minf=19 00:14:16.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:16.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.466 issued rwts: total=4608,4632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.466 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:14:16.466 job1: (groupid=0, jobs=1): err= 0: pid=342563: Mon Jul 15 21:36:06 2024 00:14:16.466 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:14:16.466 slat (usec): min=3, max=11080, avg=136.10, stdev=837.53 00:14:16.466 clat (usec): min=5229, max=60990, avg=15020.67, stdev=7360.75 00:14:16.466 lat (usec): min=5239, max=61006, avg=15156.77, stdev=7456.06 00:14:16.466 clat percentiles (usec): 00:14:16.466 | 1.00th=[ 9372], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[10290], 00:14:16.466 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13173], 60.00th=[13435], 00:14:16.466 | 70.00th=[14091], 80.00th=[17171], 90.00th=[23200], 95.00th=[30016], 00:14:16.466 | 99.00th=[46924], 99.50th=[53216], 99.90th=[61080], 99.95th=[61080], 00:14:16.466 | 99.99th=[61080] 00:14:16.466 write: IOPS=3275, BW=12.8MiB/s (13.4MB/s)(13.0MiB/1013msec); 0 zone resets 00:14:16.466 slat (usec): min=5, max=15155, avg=165.11, stdev=818.12 00:14:16.466 clat (usec): min=1540, max=60950, avg=24805.78, stdev=12465.03 00:14:16.466 lat (usec): min=1552, max=60959, avg=24970.89, stdev=12542.41 00:14:16.466 clat percentiles (usec): 00:14:16.466 | 1.00th=[ 5080], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[12911], 00:14:16.466 | 30.00th=[17171], 40.00th=[18482], 50.00th=[22676], 60.00th=[26870], 00:14:16.466 | 70.00th=[30540], 80.00th=[35390], 90.00th=[43779], 95.00th=[48497], 00:14:16.466 | 99.00th=[53740], 99.50th=[55837], 99.90th=[58459], 99.95th=[61080], 00:14:16.466 | 99.99th=[61080] 00:14:16.466 bw ( KiB/s): min=12080, max=13440, per=19.36%, avg=12760.00, stdev=961.67, samples=2 00:14:16.466 iops : min= 3020, max= 3360, avg=3190.00, stdev=240.42, samples=2 00:14:16.466 lat (msec) : 2=0.03%, 4=0.22%, 10=13.80%, 20=49.08%, 50=34.30% 00:14:16.466 lat (msec) : 100=2.57% 00:14:16.466 cpu : usr=4.05%, sys=7.71%, ctx=346, majf=0, minf=11 00:14:16.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:16.466 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.466 issued rwts: total=3072,3318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.466 job2: (groupid=0, jobs=1): err= 0: pid=342564: Mon Jul 15 21:36:06 2024 00:14:16.466 read: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1007msec) 00:14:16.466 slat (usec): min=2, max=19167, avg=148.51, stdev=1024.00 00:14:16.466 clat (usec): min=2547, max=70430, avg=17372.64, stdev=11019.54 00:14:16.466 lat (usec): min=5657, max=70464, avg=17521.15, stdev=11110.10 00:14:16.466 clat percentiles (usec): 00:14:16.466 | 1.00th=[ 5735], 5.00th=[ 8094], 10.00th=[10290], 20.00th=[10814], 00:14:16.466 | 30.00th=[11207], 40.00th=[12780], 50.00th=[14353], 60.00th=[15795], 00:14:16.466 | 70.00th=[16712], 80.00th=[18482], 90.00th=[33424], 95.00th=[45351], 00:14:16.466 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60031], 99.95th=[62653], 00:14:16.466 | 99.99th=[70779] 00:14:16.466 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:14:16.466 slat (usec): min=4, max=18454, avg=127.09, stdev=891.78 00:14:16.466 clat (usec): min=993, max=62482, avg=18519.71, stdev=11834.23 00:14:16.466 lat (usec): min=1002, max=62494, avg=18646.79, stdev=11911.84 00:14:16.466 clat percentiles (usec): 00:14:16.466 | 1.00th=[ 3785], 5.00th=[ 7832], 10.00th=[ 9503], 20.00th=[10945], 00:14:16.466 | 30.00th=[11338], 40.00th=[12125], 50.00th=[14091], 60.00th=[15401], 00:14:16.466 | 70.00th=[17957], 80.00th=[25035], 90.00th=[37487], 95.00th=[47449], 00:14:16.466 | 99.00th=[53740], 99.50th=[58459], 99.90th=[62653], 99.95th=[62653], 00:14:16.466 | 99.99th=[62653] 00:14:16.466 bw ( KiB/s): min=12288, max=16384, per=21.75%, avg=14336.00, stdev=2896.31, samples=2 00:14:16.466 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:14:16.466 lat (usec) : 1000=0.04% 
00:14:16.466 lat (msec) : 2=0.22%, 4=0.35%, 10=10.13%, 20=67.54%, 50=18.94% 00:14:16.466 lat (msec) : 100=2.78% 00:14:16.466 cpu : usr=2.88%, sys=4.57%, ctx=324, majf=0, minf=9 00:14:16.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:14:16.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.466 issued rwts: total=3535,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.466 job3: (groupid=0, jobs=1): err= 0: pid=342565: Mon Jul 15 21:36:06 2024 00:14:16.466 read: IOPS=4607, BW=18.0MiB/s (18.9MB/s)(18.2MiB/1012msec) 00:14:16.466 slat (usec): min=2, max=22164, avg=105.45, stdev=769.51 00:14:16.466 clat (usec): min=5077, max=50193, avg=14319.19, stdev=5864.10 00:14:16.466 lat (usec): min=5111, max=50274, avg=14424.64, stdev=5912.50 00:14:16.466 clat percentiles (usec): 00:14:16.466 | 1.00th=[ 7242], 5.00th=[ 8356], 10.00th=[10421], 20.00th=[11076], 00:14:16.466 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12780], 00:14:16.466 | 70.00th=[15008], 80.00th=[17695], 90.00th=[20579], 95.00th=[25822], 00:14:16.466 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:14:16.466 | 99.99th=[50070] 00:14:16.466 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.2MiB/1012msec); 0 zone resets 00:14:16.466 slat (usec): min=4, max=18088, avg=85.74, stdev=696.98 00:14:16.466 clat (usec): min=268, max=51221, avg=11940.27, stdev=5935.03 00:14:16.466 lat (usec): min=307, max=51233, avg=12026.02, stdev=5964.64 00:14:16.466 clat percentiles (usec): 00:14:16.466 | 1.00th=[ 3392], 5.00th=[ 4359], 10.00th=[ 6128], 20.00th=[ 8979], 00:14:16.466 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11207], 60.00th=[11338], 00:14:16.466 | 70.00th=[11600], 80.00th=[13566], 90.00th=[19268], 95.00th=[23987], 00:14:16.466 | 99.00th=[38011], 99.50th=[50594], 
99.90th=[51119], 99.95th=[51119], 00:14:16.466 | 99.99th=[51119] 00:14:16.466 bw ( KiB/s): min=18400, max=22304, per=30.87%, avg=20352.00, stdev=2760.54, samples=2 00:14:16.466 iops : min= 4600, max= 5576, avg=5088.00, stdev=690.14, samples=2 00:14:16.466 lat (usec) : 500=0.02%, 750=0.06% 00:14:16.466 lat (msec) : 2=0.32%, 4=1.37%, 10=16.71%, 20=71.24%, 50=9.92% 00:14:16.466 lat (msec) : 100=0.35% 00:14:16.466 cpu : usr=3.86%, sys=7.81%, ctx=314, majf=0, minf=11 00:14:16.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:16.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.466 issued rwts: total=4663,5161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.466 00:14:16.466 Run status group 0 (all jobs): 00:14:16.466 READ: bw=61.2MiB/s (64.2MB/s), 11.8MiB/s-18.0MiB/s (12.4MB/s-18.9MB/s), io=62.0MiB (65.0MB), run=1007-1013msec 00:14:16.466 WRITE: bw=64.4MiB/s (67.5MB/s), 12.8MiB/s-19.9MiB/s (13.4MB/s-20.9MB/s), io=65.2MiB (68.4MB), run=1007-1013msec 00:14:16.466 00:14:16.466 Disk stats (read/write): 00:14:16.467 nvme0n1: ios=4146/4279, merge=0/0, ticks=35065/43239, in_queue=78304, util=95.79% 00:14:16.467 nvme0n2: ios=2188/2560, merge=0/0, ticks=33945/70867, in_queue=104812, util=87.11% 00:14:16.467 nvme0n3: ios=3129/3491, merge=0/0, ticks=20249/25722, in_queue=45971, util=97.29% 00:14:16.467 nvme0n4: ios=3985/4096, merge=0/0, ticks=30501/30356, in_queue=60857, util=99.37% 00:14:16.467 21:36:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:16.467 21:36:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=342675 00:14:16.467 21:36:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:16.467 21:36:07 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@61 -- # sleep 3 00:14:16.467 [global] 00:14:16.467 thread=1 00:14:16.467 invalidate=1 00:14:16.467 rw=read 00:14:16.467 time_based=1 00:14:16.467 runtime=10 00:14:16.467 ioengine=libaio 00:14:16.467 direct=1 00:14:16.467 bs=4096 00:14:16.467 iodepth=1 00:14:16.467 norandommap=1 00:14:16.467 numjobs=1 00:14:16.467 00:14:16.467 [job0] 00:14:16.467 filename=/dev/nvme0n1 00:14:16.467 [job1] 00:14:16.467 filename=/dev/nvme0n2 00:14:16.467 [job2] 00:14:16.467 filename=/dev/nvme0n3 00:14:16.467 [job3] 00:14:16.467 filename=/dev/nvme0n4 00:14:16.467 Could not set queue depth (nvme0n1) 00:14:16.467 Could not set queue depth (nvme0n2) 00:14:16.467 Could not set queue depth (nvme0n3) 00:14:16.467 Could not set queue depth (nvme0n4) 00:14:16.467 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.467 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.467 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.467 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.467 fio-3.35 00:14:16.467 Starting 4 threads 00:14:19.746 21:36:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:19.746 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=35942400, buflen=4096 00:14:19.746 fio: pid=342750, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:19.746 21:36:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:20.004 21:36:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.004 21:36:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:20.004 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=31031296, buflen=4096 00:14:20.004 fio: pid=342749, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:20.262 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=52912128, buflen=4096 00:14:20.262 fio: pid=342747, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:20.262 21:36:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.262 21:36:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:20.521 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=45764608, buflen=4096 00:14:20.521 fio: pid=342748, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:20.521 21:36:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.521 21:36:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:20.521 00:14:20.521 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=342747: Mon Jul 15 21:36:11 2024 00:14:20.521 read: IOPS=3671, BW=14.3MiB/s (15.0MB/s)(50.5MiB/3519msec) 00:14:20.521 slat (usec): min=4, max=15787, avg=11.64, stdev=201.91 00:14:20.521 clat (usec): min=158, max=41900, avg=257.04, stdev=911.75 00:14:20.521 lat (usec): min=163, max=41915, avg=268.68, stdev=933.90 00:14:20.521 clat percentiles (usec): 00:14:20.522 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 192], 00:14:20.522 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 225], 60.00th=[ 241], 00:14:20.522 | 70.00th=[ 251], 80.00th=[ 260], 
90.00th=[ 281], 95.00th=[ 322], 00:14:20.522 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 1598], 99.95th=[26870], 00:14:20.522 | 99.99th=[41157] 00:14:20.522 bw ( KiB/s): min=11416, max=17176, per=33.33%, avg=14118.67, stdev=2313.26, samples=6 00:14:20.522 iops : min= 2854, max= 4294, avg=3529.67, stdev=578.32, samples=6 00:14:20.522 lat (usec) : 250=69.98%, 500=28.97%, 750=0.84%, 1000=0.06% 00:14:20.522 lat (msec) : 2=0.05%, 4=0.02%, 20=0.02%, 50=0.05% 00:14:20.522 cpu : usr=1.53%, sys=3.87%, ctx=12925, majf=0, minf=1 00:14:20.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.522 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.522 issued rwts: total=12919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.522 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=342748: Mon Jul 15 21:36:11 2024 00:14:20.522 read: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(43.6MiB/3819msec) 00:14:20.522 slat (usec): min=4, max=15223, avg=12.92, stdev=259.06 00:14:20.522 clat (usec): min=168, max=42189, avg=324.69, stdev=1778.73 00:14:20.522 lat (usec): min=174, max=42195, avg=337.61, stdev=1797.62 00:14:20.522 clat percentiles (usec): 00:14:20.522 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 198], 00:14:20.522 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 227], 60.00th=[ 249], 00:14:20.522 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 306], 95.00th=[ 371], 00:14:20.522 | 99.00th=[ 515], 99.50th=[ 627], 99.90th=[41157], 99.95th=[41157], 00:14:20.522 | 99.99th=[41157] 00:14:20.522 bw ( KiB/s): min= 1720, max=17725, per=26.40%, avg=11184.71, stdev=5211.90, samples=7 00:14:20.522 iops : min= 430, max= 4431, avg=2796.14, stdev=1302.92, samples=7 00:14:20.522 lat (usec) : 250=61.89%, 500=36.59%, 750=1.12%, 1000=0.08% 
00:14:20.522 lat (msec) : 2=0.08%, 4=0.02%, 10=0.01%, 50=0.20% 00:14:20.522 cpu : usr=1.75%, sys=3.43%, ctx=11180, majf=0, minf=1 00:14:20.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.522 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.522 issued rwts: total=11174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.522 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=342749: Mon Jul 15 21:36:11 2024 00:14:20.522 read: IOPS=2320, BW=9281KiB/s (9504kB/s)(29.6MiB/3265msec) 00:14:20.522 slat (nsec): min=5098, max=41285, avg=9245.54, stdev=4102.49 00:14:20.522 clat (usec): min=172, max=41966, avg=415.82, stdev=2502.37 00:14:20.522 lat (usec): min=177, max=41980, avg=425.07, stdev=2502.88 00:14:20.522 clat percentiles (usec): 00:14:20.522 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 215], 00:14:20.522 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 260], 00:14:20.522 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 363], 00:14:20.522 | 99.00th=[ 515], 99.50th=[ 914], 99.90th=[41157], 99.95th=[41157], 00:14:20.522 | 99.99th=[42206] 00:14:20.522 bw ( KiB/s): min= 1688, max=14792, per=23.77%, avg=10070.67, stdev=5841.52, samples=6 00:14:20.522 iops : min= 422, max= 3698, avg=2517.67, stdev=1460.38, samples=6 00:14:20.522 lat (usec) : 250=50.60%, 500=47.92%, 750=0.88%, 1000=0.11% 00:14:20.522 lat (msec) : 2=0.03%, 4=0.04%, 10=0.03%, 50=0.38% 00:14:20.522 cpu : usr=1.10%, sys=3.09%, ctx=7581, majf=0, minf=1 00:14:20.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.522 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:14:20.522 issued rwts: total=7577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.522 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=342750: Mon Jul 15 21:36:11 2024 00:14:20.522 read: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(34.3MiB/2914msec) 00:14:20.522 slat (nsec): min=5057, max=44632, avg=8668.21, stdev=4250.08 00:14:20.522 clat (usec): min=178, max=42055, avg=320.24, stdev=1581.42 00:14:20.522 lat (usec): min=184, max=42061, avg=328.91, stdev=1581.79 00:14:20.522 clat percentiles (usec): 00:14:20.522 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:14:20.522 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 241], 60.00th=[ 251], 00:14:20.522 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 379], 00:14:20.522 | 99.00th=[ 515], 99.50th=[ 644], 99.90th=[41157], 99.95th=[41157], 00:14:20.522 | 99.99th=[42206] 00:14:20.522 bw ( KiB/s): min= 6440, max=14168, per=27.59%, avg=11688.00, stdev=3044.44, samples=5 00:14:20.522 iops : min= 1610, max= 3542, avg=2922.00, stdev=761.11, samples=5 00:14:20.522 lat (usec) : 250=58.50%, 500=39.23%, 750=1.85%, 1000=0.14% 00:14:20.522 lat (msec) : 2=0.05%, 4=0.02%, 10=0.03%, 20=0.01%, 50=0.16% 00:14:20.522 cpu : usr=1.44%, sys=3.81%, ctx=8778, majf=0, minf=1 00:14:20.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.522 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.522 issued rwts: total=8776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.522 00:14:20.522 Run status group 0 (all jobs): 00:14:20.522 READ: bw=41.4MiB/s (43.4MB/s), 9281KiB/s-14.3MiB/s (9504kB/s-15.0MB/s), io=158MiB (166MB), run=2914-3819msec 00:14:20.522 00:14:20.522 Disk stats (read/write): 00:14:20.522 
nvme0n1: ios=12330/0, merge=0/0, ticks=3143/0, in_queue=3143, util=95.57% 00:14:20.522 nvme0n2: ios=10252/0, merge=0/0, ticks=3368/0, in_queue=3368, util=95.26% 00:14:20.522 nvme0n3: ios=7618/0, merge=0/0, ticks=3825/0, in_queue=3825, util=98.85% 00:14:20.522 nvme0n4: ios=8652/0, merge=0/0, ticks=2847/0, in_queue=2847, util=100.00% 00:14:20.779 21:36:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.779 21:36:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:21.036 21:36:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.036 21:36:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:21.601 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.601 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:21.858 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.858 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 342675 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 
1 controller(s) 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:22.115 nvmf hotplug test: fio failed as expected 00:14:22.115 21:36:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.373 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.373 rmmod nvme_tcp 00:14:22.373 rmmod nvme_fabrics 00:14:22.373 rmmod nvme_keyring 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 340979 ']' 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 340979 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 340979 ']' 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 340979 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 340979 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 340979' 00:14:22.641 killing process with pid 340979 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 340979 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 340979 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.641 21:36:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.178 21:36:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.178 00:14:25.178 real 0m23.774s 00:14:25.178 user 1m23.716s 00:14:25.178 sys 0m7.172s 00:14:25.178 21:36:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.178 21:36:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.178 ************************************ 00:14:25.178 END TEST nvmf_fio_target 00:14:25.178 ************************************ 00:14:25.178 21:36:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:25.178 21:36:15 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:25.178 21:36:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:25.178 21:36:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.178 21:36:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.178 ************************************ 00:14:25.178 START TEST nvmf_bdevio 00:14:25.178 ************************************ 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:25.178 * Looking for test storage... 00:14:25.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.178 21:36:15 
nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.178 21:36:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.179 21:36:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:26.555 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:26.555 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.555 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:26.556 Found net devices under 0000:08:00.0: cvl_0_0 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:26.556 Found net devices under 0000:08:00.1: cvl_0_1 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.556 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:14:26.813 00:14:26.813 --- 10.0.0.2 ping statistics --- 00:14:26.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.813 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:14:26.813 00:14:26.813 --- 10.0.0.1 ping statistics --- 00:14:26.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.813 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=345348 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 345348 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 345348 ']' 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:26.813 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:26.813 [2024-07-15 21:36:17.441454] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:14:26.813 [2024-07-15 21:36:17.441556] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.813 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.813 [2024-07-15 21:36:17.506955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.070 [2024-07-15 21:36:17.624598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.070 [2024-07-15 21:36:17.624655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.070 [2024-07-15 21:36:17.624678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.070 [2024-07-15 21:36:17.624692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.070 [2024-07-15 21:36:17.624704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:27.070 [2024-07-15 21:36:17.624806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:27.070 [2024-07-15 21:36:17.624887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:27.070 [2024-07-15 21:36:17.624985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:27.070 [2024-07-15 21:36:17.624993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:27.070 [2024-07-15 21:36:17.773954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:27.070 Malloc0 00:14:27.070 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:27.071 [2024-07-15 21:36:17.824380] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:14:27.071 { 00:14:27.071 "params": { 00:14:27.071 "name": "Nvme$subsystem", 00:14:27.071 "trtype": "$TEST_TRANSPORT", 00:14:27.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:27.071 "adrfam": "ipv4", 00:14:27.071 "trsvcid": "$NVMF_PORT", 00:14:27.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:27.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:27.071 "hdgst": ${hdgst:-false}, 00:14:27.071 "ddgst": ${ddgst:-false} 00:14:27.071 }, 00:14:27.071 "method": "bdev_nvme_attach_controller" 00:14:27.071 } 00:14:27.071 EOF 00:14:27.071 )") 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:27.071 21:36:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:27.071 "params": { 00:14:27.071 "name": "Nvme1", 00:14:27.071 "trtype": "tcp", 00:14:27.071 "traddr": "10.0.0.2", 00:14:27.071 "adrfam": "ipv4", 00:14:27.071 "trsvcid": "4420", 00:14:27.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:27.071 "hdgst": false, 00:14:27.071 "ddgst": false 00:14:27.071 }, 00:14:27.071 "method": "bdev_nvme_attach_controller" 00:14:27.071 }' 00:14:27.328 [2024-07-15 21:36:17.875526] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:14:27.328 [2024-07-15 21:36:17.875621] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345407 ] 00:14:27.328 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.328 [2024-07-15 21:36:17.937456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:27.328 [2024-07-15 21:36:18.050752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.328 [2024-07-15 21:36:18.050806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.328 [2024-07-15 21:36:18.050810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.586 I/O targets: 00:14:27.586 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:27.586 00:14:27.586 00:14:27.586 CUnit - A unit testing framework for C - Version 2.1-3 00:14:27.586 http://cunit.sourceforge.net/ 00:14:27.586 00:14:27.586 00:14:27.586 Suite: bdevio tests on: Nvme1n1 00:14:27.586 Test: blockdev write read block ...passed 00:14:27.586 Test: blockdev write zeroes read block ...passed 00:14:27.586 Test: blockdev write zeroes read no split ...passed 00:14:27.586 Test: blockdev write zeroes read split ...passed 00:14:27.586 Test: blockdev write zeroes read split partial ...passed 00:14:27.586 Test: blockdev reset ...[2024-07-15 21:36:18.370158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:27.586 [2024-07-15 21:36:18.370266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x851f60 (9): Bad file descriptor 00:14:27.843 [2024-07-15 21:36:18.426126] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:27.843 passed 00:14:27.843 Test: blockdev write read 8 blocks ...passed 00:14:27.843 Test: blockdev write read size > 128k ...passed 00:14:27.843 Test: blockdev write read invalid size ...passed 00:14:27.843 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:27.843 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:27.843 Test: blockdev write read max offset ...passed 00:14:27.843 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:27.843 Test: blockdev writev readv 8 blocks ...passed 00:14:27.843 Test: blockdev writev readv 30 x 1block ...passed 00:14:27.843 Test: blockdev writev readv block ...passed 00:14:27.843 Test: blockdev writev readv size > 128k ...passed 00:14:28.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:28.101 Test: blockdev comparev and writev ...[2024-07-15 21:36:18.678118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.101 [2024-07-15 21:36:18.678163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.678187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.101 [2024-07-15 21:36:18.678203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.678495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.101 [2024-07-15 21:36:18.678518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.678548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.101 [2024-07-15 21:36:18.678563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.678841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.101 [2024-07-15 21:36:18.678864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.678885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.101 [2024-07-15 21:36:18.678900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.679189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.101 [2024-07-15 21:36:18.679212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.679233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.101 [2024-07-15 21:36:18.679248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:28.101 passed 00:14:28.101 Test: blockdev nvme passthru rw ...passed 00:14:28.101 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:36:18.761358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.101 [2024-07-15 21:36:18.761385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.761522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.101 [2024-07-15 21:36:18.761545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.761677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.101 [2024-07-15 21:36:18.761700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:28.101 [2024-07-15 21:36:18.761833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.101 [2024-07-15 21:36:18.761855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:28.101 passed 00:14:28.101 Test: blockdev nvme admin passthru ...passed 00:14:28.101 Test: blockdev copy ...passed 00:14:28.101 00:14:28.101 Run Summary: Type Total Ran Passed Failed Inactive 00:14:28.101 suites 1 1 n/a 0 0 00:14:28.101 tests 23 23 23 0 0 00:14:28.101 asserts 152 152 152 0 n/a 00:14:28.101 00:14:28.101 Elapsed time = 1.211 seconds 00:14:28.359 21:36:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.359 21:36:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.359 21:36:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:28.359 21:36:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.359 21:36:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:28.360 21:36:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:14:28.360 21:36:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:28.360 21:36:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.360 rmmod nvme_tcp 00:14:28.360 rmmod nvme_fabrics 00:14:28.360 rmmod nvme_keyring 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 345348 ']' 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 345348 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 345348 ']' 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 345348 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 345348 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 345348' 00:14:28.360 killing process with pid 345348 00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 345348 
00:14:28.360 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 345348 00:14:28.619 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.619 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.619 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.619 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.619 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.619 21:36:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.619 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.619 21:36:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.155 21:36:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.155 00:14:31.155 real 0m5.836s 00:14:31.155 user 0m9.270s 00:14:31.155 sys 0m1.792s 00:14:31.155 21:36:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.155 21:36:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:31.155 ************************************ 00:14:31.155 END TEST nvmf_bdevio 00:14:31.155 ************************************ 00:14:31.155 21:36:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.155 21:36:21 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:31.155 21:36:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.155 21:36:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.155 21:36:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.155 ************************************ 00:14:31.155 START TEST nvmf_auth_target 00:14:31.155 
************************************ 00:14:31.155 21:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:31.155 * Looking for test storage... 00:14:31.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.155 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.155 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:31.155 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.156 21:36:21 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.156 21:36:21 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.156 21:36:21 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.156 21:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.529 21:36:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.529 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:32.530 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.530 21:36:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:32.530 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:32.530 Found net devices under 0000:08:00.0: cvl_0_0 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:32.530 Found net devices under 0000:08:00.1: cvl_0_1 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:32.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:14:32.530 00:14:32.530 --- 10.0.0.2 ping statistics --- 00:14:32.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.530 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:14:32.530 00:14:32.530 --- 10.0.0.1 ping statistics --- 00:14:32.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.530 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- 
# xtrace_disable 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=347010 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 347010 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 347010 ']' 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.530 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=347030 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=1deb40c494aed7b0f443bc722aea61ffbad842065f441bb6 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Iht 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1deb40c494aed7b0f443bc722aea61ffbad842065f441bb6 0 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1deb40c494aed7b0f443bc722aea61ffbad842065f441bb6 0 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1deb40c494aed7b0f443bc722aea61ffbad842065f441bb6 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Iht 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Iht 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Iht 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:33.096 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:33.097 21:36:23 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=513d47c3d3373decee09e547bac89b7cd1d59184656380c4f2cdef908842588b 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ffB 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 513d47c3d3373decee09e547bac89b7cd1d59184656380c4f2cdef908842588b 3 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 513d47c3d3373decee09e547bac89b7cd1d59184656380c4f2cdef908842588b 3 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=513d47c3d3373decee09e547bac89b7cd1d59184656380c4f2cdef908842588b 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ffB 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ffB 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ffB 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f4fb578cda9ad2e8db8be26de4978a2f 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GTn 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f4fb578cda9ad2e8db8be26de4978a2f 1 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f4fb578cda9ad2e8db8be26de4978a2f 1 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f4fb578cda9ad2e8db8be26de4978a2f 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GTn 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GTn 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.GTn 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=77677d2f6d4880c1a59b26bc1b1b4deb2182366a71f75261 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nEh 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 77677d2f6d4880c1a59b26bc1b1b4deb2182366a71f75261 2 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 77677d2f6d4880c1a59b26bc1b1b4deb2182366a71f75261 2 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=77677d2f6d4880c1a59b26bc1b1b4deb2182366a71f75261 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nEh 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nEh 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.nEh 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dcfab07736c94b27f07aff1efe0083f194a0087ae01a193d 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0w1 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dcfab07736c94b27f07aff1efe0083f194a0087ae01a193d 2 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dcfab07736c94b27f07aff1efe0083f194a0087ae01a193d 2 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dcfab07736c94b27f07aff1efe0083f194a0087ae01a193d 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:33.097 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:33.355 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0w1 00:14:33.355 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0w1 00:14:33.355 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.0w1 00:14:33.355 21:36:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:33.355 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:33.355 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.355 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:33.355 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a29df0bc53d18267629cf654b959b3b5 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jjK 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a29df0bc53d18267629cf654b959b3b5 1 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a29df0bc53d18267629cf654b959b3b5 1 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a29df0bc53d18267629cf654b959b3b5 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jjK 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jjK 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.jjK 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1dd97fe389bae8e7325bc4f016df880280288553661380a2b9f114cc67c31638 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.vqJ 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1dd97fe389bae8e7325bc4f016df880280288553661380a2b9f114cc67c31638 3 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1dd97fe389bae8e7325bc4f016df880280288553661380a2b9f114cc67c31638 3 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1dd97fe389bae8e7325bc4f016df880280288553661380a2b9f114cc67c31638 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:33.356 21:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.vqJ 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.vqJ 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.vqJ 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 347010 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 347010 ']' 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.356 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 347030 /var/tmp/host.sock 00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 347030 ']' 00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:33.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.613 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Iht 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Iht 00:14:33.870 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Iht 00:14:34.127 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ffB ]] 00:14:34.127 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ffB 00:14:34.127 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.127 21:36:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.127 21:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.127 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ffB 00:14:34.127 21:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ffB 00:14:34.385 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:34.385 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GTn 00:14:34.385 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.385 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.385 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.385 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.GTn 00:14:34.385 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.GTn 00:14:34.642 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.nEh ]] 00:14:34.642 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nEh 00:14:34.642 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.642 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.642 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.642 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nEh 00:14:34.642 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nEh 00:14:34.900 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:34.900 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0w1 00:14:34.900 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.900 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.900 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.900 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0w1 00:14:34.900 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0w1 00:14:35.156 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.jjK ]] 00:14:35.156 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jjK 00:14:35.156 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.156 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.156 21:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.156 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jjK 00:14:35.156 21:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.jjK 00:14:35.413 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:35.413 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.vqJ 00:14:35.413 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.413 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.413 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.413 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.vqJ 00:14:35.413 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.vqJ 00:14:35.670 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:35.670 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:35.670 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.670 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.670 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:35.670 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.926 21:36:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.926 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.182 00:14:36.182 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.182 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.182 21:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.439 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.439 
21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.439 21:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.439 21:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.439 21:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.439 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.439 { 00:14:36.439 "cntlid": 1, 00:14:36.439 "qid": 0, 00:14:36.439 "state": "enabled", 00:14:36.439 "thread": "nvmf_tgt_poll_group_000", 00:14:36.439 "listen_address": { 00:14:36.439 "trtype": "TCP", 00:14:36.439 "adrfam": "IPv4", 00:14:36.439 "traddr": "10.0.0.2", 00:14:36.439 "trsvcid": "4420" 00:14:36.439 }, 00:14:36.439 "peer_address": { 00:14:36.439 "trtype": "TCP", 00:14:36.439 "adrfam": "IPv4", 00:14:36.439 "traddr": "10.0.0.1", 00:14:36.439 "trsvcid": "33398" 00:14:36.439 }, 00:14:36.439 "auth": { 00:14:36.439 "state": "completed", 00:14:36.439 "digest": "sha256", 00:14:36.439 "dhgroup": "null" 00:14:36.439 } 00:14:36.439 } 00:14:36.439 ]' 00:14:36.439 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.439 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.439 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.694 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:36.694 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.694 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.694 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.695 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.951 21:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.209 21:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.771 00:14:42.771 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.771 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.771 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.027 { 00:14:43.027 "cntlid": 3, 00:14:43.027 "qid": 0, 00:14:43.027 "state": "enabled", 00:14:43.027 "thread": "nvmf_tgt_poll_group_000", 00:14:43.027 "listen_address": { 00:14:43.027 "trtype": "TCP", 00:14:43.027 "adrfam": "IPv4", 00:14:43.027 "traddr": "10.0.0.2", 00:14:43.027 "trsvcid": "4420" 00:14:43.027 }, 00:14:43.027 "peer_address": { 00:14:43.027 "trtype": "TCP", 00:14:43.027 "adrfam": "IPv4", 00:14:43.027 "traddr": "10.0.0.1", 00:14:43.027 "trsvcid": "33418" 00:14:43.027 }, 00:14:43.027 "auth": { 00:14:43.027 "state": "completed", 00:14:43.027 "digest": "sha256", 00:14:43.027 "dhgroup": "null" 00:14:43.027 } 00:14:43.027 } 00:14:43.027 ]' 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:14:43.027 21:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.284 21:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:14:44.650 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.650 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:44.650 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.650 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.650 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.650 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.650 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:44.650 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.907 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.164 00:14:45.164 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.164 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.164 21:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.420 { 00:14:45.420 "cntlid": 5, 00:14:45.420 "qid": 0, 00:14:45.420 "state": "enabled", 00:14:45.420 "thread": "nvmf_tgt_poll_group_000", 00:14:45.420 "listen_address": { 00:14:45.420 "trtype": "TCP", 00:14:45.420 "adrfam": "IPv4", 00:14:45.420 "traddr": "10.0.0.2", 00:14:45.420 "trsvcid": "4420" 00:14:45.420 }, 00:14:45.420 "peer_address": { 00:14:45.420 "trtype": "TCP", 00:14:45.420 "adrfam": "IPv4", 00:14:45.420 "traddr": "10.0.0.1", 00:14:45.420 "trsvcid": "47790" 00:14:45.420 }, 00:14:45.420 "auth": { 00:14:45.420 "state": "completed", 00:14:45.420 "digest": "sha256", 00:14:45.420 "dhgroup": "null" 00:14:45.420 } 00:14:45.420 } 00:14:45.420 ]' 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.420 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:45.679 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.679 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.679 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.679 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.936 21:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:14:46.868 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.868 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:46.868 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.868 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.868 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.868 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.868 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:46.868 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:47.127 21:36:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.127 21:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.723 00:14:47.723 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.723 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.723 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.029 { 00:14:48.029 "cntlid": 7, 00:14:48.029 "qid": 0, 00:14:48.029 "state": "enabled", 00:14:48.029 "thread": "nvmf_tgt_poll_group_000", 00:14:48.029 "listen_address": { 00:14:48.029 "trtype": "TCP", 00:14:48.029 "adrfam": "IPv4", 00:14:48.029 "traddr": "10.0.0.2", 00:14:48.029 "trsvcid": "4420" 00:14:48.029 }, 00:14:48.029 "peer_address": { 00:14:48.029 "trtype": "TCP", 00:14:48.029 "adrfam": "IPv4", 00:14:48.029 "traddr": "10.0.0.1", 00:14:48.029 "trsvcid": "47818" 00:14:48.029 }, 00:14:48.029 "auth": { 00:14:48.029 "state": "completed", 00:14:48.029 "digest": "sha256", 00:14:48.029 "dhgroup": "null" 00:14:48.029 } 00:14:48.029 } 00:14:48.029 ]' 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:48.029 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.320 21:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:14:49.304 21:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.304 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:49.304 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.304 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.304 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.304 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:49.304 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.304 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.304 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.562 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.126 00:14:50.126 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.126 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.126 21:36:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.384 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.384 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.384 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.384 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.384 21:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.384 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.384 { 00:14:50.384 "cntlid": 9, 00:14:50.384 "qid": 0, 00:14:50.384 "state": "enabled", 00:14:50.384 "thread": "nvmf_tgt_poll_group_000", 00:14:50.384 "listen_address": { 00:14:50.384 "trtype": "TCP", 00:14:50.384 "adrfam": "IPv4", 00:14:50.384 "traddr": "10.0.0.2", 00:14:50.384 "trsvcid": "4420" 00:14:50.384 }, 00:14:50.384 "peer_address": { 00:14:50.384 "trtype": "TCP", 00:14:50.384 "adrfam": "IPv4", 00:14:50.384 "traddr": "10.0.0.1", 00:14:50.384 "trsvcid": "47844" 00:14:50.384 }, 00:14:50.384 "auth": { 00:14:50.384 "state": "completed", 00:14:50.384 "digest": "sha256", 00:14:50.384 "dhgroup": "ffdhe2048" 00:14:50.384 } 00:14:50.384 } 00:14:50.384 ]' 00:14:50.384 21:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.384 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.384 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.384 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:50.384 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.384 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.384 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.384 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.641 21:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:14:52.013 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.013 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:52.013 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.013 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.013 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.013 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.013 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:52.013 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.014 21:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.579 00:14:52.579 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.579 21:36:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.579 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.837 { 00:14:52.837 "cntlid": 11, 00:14:52.837 "qid": 0, 00:14:52.837 "state": "enabled", 00:14:52.837 "thread": "nvmf_tgt_poll_group_000", 00:14:52.837 "listen_address": { 00:14:52.837 "trtype": "TCP", 00:14:52.837 "adrfam": "IPv4", 00:14:52.837 "traddr": "10.0.0.2", 00:14:52.837 "trsvcid": "4420" 00:14:52.837 }, 00:14:52.837 "peer_address": { 00:14:52.837 "trtype": "TCP", 00:14:52.837 "adrfam": "IPv4", 00:14:52.837 "traddr": "10.0.0.1", 00:14:52.837 "trsvcid": "47870" 00:14:52.837 }, 00:14:52.837 "auth": { 00:14:52.837 "state": "completed", 00:14:52.837 "digest": "sha256", 00:14:52.837 "dhgroup": "ffdhe2048" 00:14:52.837 } 00:14:52.837 } 00:14:52.837 ]' 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:52.837 21:36:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.837 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.400 21:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:14:54.331 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.331 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:54.331 21:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.331 21:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.331 21:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.331 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.331 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:54.331 21:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.589 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.846 
00:14:54.846 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.846 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.846 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.411 { 00:14:55.411 "cntlid": 13, 00:14:55.411 "qid": 0, 00:14:55.411 "state": "enabled", 00:14:55.411 "thread": "nvmf_tgt_poll_group_000", 00:14:55.411 "listen_address": { 00:14:55.411 "trtype": "TCP", 00:14:55.411 "adrfam": "IPv4", 00:14:55.411 "traddr": "10.0.0.2", 00:14:55.411 "trsvcid": "4420" 00:14:55.411 }, 00:14:55.411 "peer_address": { 00:14:55.411 "trtype": "TCP", 00:14:55.411 "adrfam": "IPv4", 00:14:55.411 "traddr": "10.0.0.1", 00:14:55.411 "trsvcid": "47902" 00:14:55.411 }, 00:14:55.411 "auth": { 00:14:55.411 "state": "completed", 00:14:55.411 "digest": "sha256", 00:14:55.411 "dhgroup": "ffdhe2048" 00:14:55.411 } 00:14:55.411 } 00:14:55.411 ]' 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.411 21:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.411 21:36:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:55.411 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.411 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.411 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.411 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.669 21:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:14:56.601 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.601 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:56.601 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.601 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.601 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.601 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.601 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
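Each iteration in the trace above ends with the same verification step: after `bdev_nvme_attach_controller` succeeds, `target/auth.sh` calls `nvmf_subsystem_get_qpairs` and uses `jq` (lines 46-48 of the script) to confirm the negotiated digest, DH group, and auth state on the resulting qpair. The sketch below mirrors that check offline against a qpair record copied from this log; the `json_field` helper is illustrative (the real script uses `jq`), and the compact single-line JSON is an assumption made so plain `sed` can stand in for it.

```shell
# One qpair record as reported by nvmf_subsystem_get_qpairs in this log,
# flattened to a single line (assumption: compact formatting, so that
# line-oriented sed can extract fields without jq).
qpairs='[{"cntlid": 9, "qid": 0, "state": "enabled", "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe2048"}}]'

# Illustrative helper, not part of auth.sh: pull a string field out of the
# compact JSON above. The greedy leading .* anchors on the LAST occurrence
# of the key, so asking for "state" returns auth.state, not the qpair state.
json_field() { printf '%s' "$qpairs" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"; }

digest=$(json_field digest)      # auth.sh line 46: expects sha256 here
dhgroup=$(json_field dhgroup)    # auth.sh line 47: expects ffdhe2048 here
auth_state=$(json_field state)   # auth.sh line 48: expects "completed"

[ "$digest" = "sha256" ] && [ "$dhgroup" = "ffdhe2048" ] \
  && [ "$auth_state" = "completed" ] && echo "auth parameters verified"
```

Against a live target the same fields would come from `rpc.py -s /var/tmp/host.sock nvmf_subsystem_get_qpairs <subsystem NQN>` piped through the `jq` filters shown in the trace.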
00:14:56.601 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.859 21:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.423 
00:14:57.423 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.423 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.423 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.681 { 00:14:57.681 "cntlid": 15, 00:14:57.681 "qid": 0, 00:14:57.681 "state": "enabled", 00:14:57.681 "thread": "nvmf_tgt_poll_group_000", 00:14:57.681 "listen_address": { 00:14:57.681 "trtype": "TCP", 00:14:57.681 "adrfam": "IPv4", 00:14:57.681 "traddr": "10.0.0.2", 00:14:57.681 "trsvcid": "4420" 00:14:57.681 }, 00:14:57.681 "peer_address": { 00:14:57.681 "trtype": "TCP", 00:14:57.681 "adrfam": "IPv4", 00:14:57.681 "traddr": "10.0.0.1", 00:14:57.681 "trsvcid": "37668" 00:14:57.681 }, 00:14:57.681 "auth": { 00:14:57.681 "state": "completed", 00:14:57.681 "digest": "sha256", 00:14:57.681 "dhgroup": "ffdhe2048" 00:14:57.681 } 00:14:57.681 } 00:14:57.681 ]' 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.681 21:36:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.681 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.939 21:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:58.872 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.437 21:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.694 00:14:59.694 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.694 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.694 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.951 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.951 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.951 21:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.951 21:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.951 21:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.951 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.951 { 00:14:59.951 "cntlid": 17, 00:14:59.951 "qid": 0, 00:14:59.951 "state": "enabled", 00:14:59.951 "thread": "nvmf_tgt_poll_group_000", 00:14:59.951 "listen_address": { 00:14:59.951 "trtype": "TCP", 00:14:59.951 "adrfam": "IPv4", 00:14:59.951 "traddr": "10.0.0.2", 00:14:59.951 "trsvcid": "4420" 00:14:59.951 }, 00:14:59.951 "peer_address": { 00:14:59.951 "trtype": "TCP", 00:14:59.951 "adrfam": "IPv4", 00:14:59.951 "traddr": "10.0.0.1", 00:14:59.951 "trsvcid": "37696" 00:14:59.951 }, 00:14:59.951 "auth": { 00:14:59.951 "state": "completed", 00:14:59.951 "digest": "sha256", 00:14:59.951 "dhgroup": "ffdhe3072" 00:14:59.951 } 00:14:59.951 } 00:14:59.951 ]' 00:14:59.951 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.951 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:14:59.952 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.952 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:59.952 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.209 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.209 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.209 21:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.467 21:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:15:01.399 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.399 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:01.399 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.399 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.399 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.399 21:36:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.399 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.399 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.657 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:02.224
00:15:02.224 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:02.224 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:02.224 21:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:02.482 {
00:15:02.482 "cntlid": 19,
00:15:02.482 "qid": 0,
00:15:02.482 "state": "enabled",
00:15:02.482 "thread": "nvmf_tgt_poll_group_000",
00:15:02.482 "listen_address": {
00:15:02.482 "trtype": "TCP",
00:15:02.482 "adrfam": "IPv4",
00:15:02.482 "traddr": "10.0.0.2",
00:15:02.482 "trsvcid": "4420"
00:15:02.482 },
00:15:02.482 "peer_address": {
00:15:02.482 "trtype": "TCP",
00:15:02.482 "adrfam": "IPv4",
00:15:02.482 "traddr": "10.0.0.1",
00:15:02.482 "trsvcid": "37722"
00:15:02.482 },
00:15:02.482 "auth": {
00:15:02.482 "state": "completed",
00:15:02.482 "digest": "sha256",
00:15:02.482 "dhgroup": "ffdhe3072"
00:15:02.482 }
00:15:02.482 }
00:15:02.482 ]'
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:02.482 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:03.044 21:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==:
00:15:03.973 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:03.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:03.973 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:15:03.973 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:03.973 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.973 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:03.973 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:03.973 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:03.973 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:04.231 21:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:04.488
00:15:04.488 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:04.488 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:04.488 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:04.745 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:04.745 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:04.745 21:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:04.745 21:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:04.745 21:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:04.745 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:04.745 {
00:15:04.745 "cntlid": 21,
00:15:04.745 "qid": 0,
00:15:04.745 "state": "enabled",
00:15:04.745 "thread": "nvmf_tgt_poll_group_000",
00:15:04.745 "listen_address": {
00:15:04.745 "trtype": "TCP",
00:15:04.745 "adrfam": "IPv4",
00:15:04.745 "traddr": "10.0.0.2",
00:15:04.745 "trsvcid": "4420"
00:15:04.745 },
00:15:04.745 "peer_address": {
00:15:04.745 "trtype": "TCP",
00:15:04.745 "adrfam": "IPv4",
00:15:04.745 "traddr": "10.0.0.1",
00:15:04.745 "trsvcid": "37738"
00:15:04.745 },
00:15:04.745 "auth": {
00:15:04.745 "state": "completed",
00:15:04.745 "digest": "sha256",
00:15:04.745 "dhgroup": "ffdhe3072"
00:15:04.745 }
00:15:04.745 }
00:15:04.745 ]'
00:15:04.745 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:05.003 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:05.003 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:05.003 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:05.003 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:05.003 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:05.003 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:05.003 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:05.260 21:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF:
00:15:06.187 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:06.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:06.188 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:15:06.188 21:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:06.188 21:36:56
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.188 21:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:06.188 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:06.188 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:06.188 21:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:06.749 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:07.005
00:15:07.005 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:07.005 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:07.005 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:07.262 {
00:15:07.262 "cntlid": 23,
00:15:07.262 "qid": 0,
00:15:07.262 "state": "enabled",
00:15:07.262 "thread": "nvmf_tgt_poll_group_000",
00:15:07.262 "listen_address": {
00:15:07.262 "trtype": "TCP",
00:15:07.262 "adrfam": "IPv4",
00:15:07.262 "traddr": "10.0.0.2",
00:15:07.262 "trsvcid": "4420"
00:15:07.262 },
00:15:07.262 "peer_address": {
00:15:07.262 "trtype": "TCP",
00:15:07.262 "adrfam": "IPv4",
00:15:07.262 "traddr": "10.0.0.1",
00:15:07.262 "trsvcid": "53518"
00:15:07.262 },
00:15:07.262 "auth": {
00:15:07.262 "state": "completed",
00:15:07.262 "digest": "sha256",
00:15:07.262 "dhgroup": "ffdhe3072"
00:15:07.262 }
00:15:07.262 }
00:15:07.262 ]'
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:07.262 21:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:07.262 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:07.262 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:07.519 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:07.519 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:07.519 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:07.775 21:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=:
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:08.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:08.704 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:08.961 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:08.962 21:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:09.526
00:15:09.526 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:09.526 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:09.526 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:09.782 {
00:15:09.782 "cntlid": 25,
00:15:09.782 "qid": 0,
00:15:09.782 "state": "enabled",
00:15:09.782 "thread": "nvmf_tgt_poll_group_000",
00:15:09.782 "listen_address": {
00:15:09.782 "trtype": "TCP",
00:15:09.782 "adrfam": "IPv4",
00:15:09.782 "traddr": "10.0.0.2",
00:15:09.782 "trsvcid": "4420"
00:15:09.782 },
00:15:09.782 "peer_address": {
00:15:09.782 "trtype": "TCP",
00:15:09.782 "adrfam": "IPv4",
00:15:09.782 "traddr": "10.0.0.1",
00:15:09.782 "trsvcid": "53554"
00:15:09.782 },
00:15:09.782 "auth": {
00:15:09.782 "state": "completed",
00:15:09.782 "digest": "sha256",
00:15:09.782 "dhgroup": "ffdhe4096"
00:15:09.782 }
00:15:09.782 }
00:15:09.782 ]'
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:09.782 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:10.346 21:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=:
00:15:11.279 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:11.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:11.279 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:15:11.279 21:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:11.279 21:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.279 21:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:11.279 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:11.279 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:11.279 21:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:11.536 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:15:11.536 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:11.536 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:11.536 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:11.537 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:11.537 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:11.537 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:11.537 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:11.537 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:11.537 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:11.537 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:11.537 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:11.794
00:15:12.052 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:12.052 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:12.052 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:12.052 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:12.052 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:12.052 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:12.053 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:12.310 {
00:15:12.310 "cntlid": 27,
00:15:12.310 "qid": 0,
00:15:12.310 "state": "enabled",
00:15:12.310 "thread": "nvmf_tgt_poll_group_000",
00:15:12.310 "listen_address": {
00:15:12.310 "trtype": "TCP",
00:15:12.310 "adrfam": "IPv4",
00:15:12.310 "traddr": "10.0.0.2",
00:15:12.310 "trsvcid": "4420"
00:15:12.310 },
00:15:12.310 "peer_address": {
00:15:12.310 "trtype": "TCP",
00:15:12.310 "adrfam": "IPv4",
00:15:12.310 "traddr": "10.0.0.1",
00:15:12.310 "trsvcid": "53588"
00:15:12.310 },
00:15:12.310 "auth": {
00:15:12.310 "state": "completed",
00:15:12.310 "digest": "sha256",
00:15:12.310 "dhgroup": "ffdhe4096"
00:15:12.310 }
00:15:12.310 }
00:15:12.310 ]'
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:12.310 21:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:12.568 21:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==:
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:13.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.964 21:37:04
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:13.964 21:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:14.529
00:15:14.529 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:14.529 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:14.529 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:14.786 {
00:15:14.786 "cntlid": 29,
00:15:14.786 "qid": 0,
00:15:14.786 "state": "enabled",
00:15:14.786 "thread": "nvmf_tgt_poll_group_000",
00:15:14.786 "listen_address": {
00:15:14.786 "trtype": "TCP",
00:15:14.786 "adrfam": "IPv4",
00:15:14.786 "traddr": "10.0.0.2",
00:15:14.786 "trsvcid": "4420"
00:15:14.786 },
00:15:14.786 "peer_address": {
00:15:14.786 "trtype": "TCP",
00:15:14.786 "adrfam": "IPv4",
00:15:14.786 "traddr": "10.0.0.1",
00:15:14.786 "trsvcid": "53626"
00:15:14.786 },
00:15:14.786 "auth": {
00:15:14.786 "state": "completed",
00:15:14.786 "digest": "sha256",
00:15:14.786 "dhgroup": "ffdhe4096"
00:15:14.786 }
00:15:14.786 }
00:15:14.786 ]'
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:14.786 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:15.044 21:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF:
00:15:16.415 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:16.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:16.415 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:15:16.415 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:16.415 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.415 21:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:16.415 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:16.415 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:16.415 21:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:16.415 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:16.979
00:15:16.979 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:16.979 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:16.979 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:17.237 {
00:15:17.237 "cntlid": 31,
00:15:17.237 "qid": 0,
00:15:17.237 "state": "enabled",
00:15:17.237 "thread": "nvmf_tgt_poll_group_000",
00:15:17.237 "listen_address": {
00:15:17.237 "trtype": "TCP",
00:15:17.237 "adrfam": "IPv4",
00:15:17.237 "traddr": "10.0.0.2",
00:15:17.237 "trsvcid": "4420"
00:15:17.237 },
00:15:17.237 "peer_address": {
00:15:17.237 "trtype": "TCP",
00:15:17.237 "adrfam": "IPv4",
00:15:17.237 "traddr": "10.0.0.1",
00:15:17.237 "trsvcid": "56080"
00:15:17.237 },
00:15:17.237 "auth": {
00:15:17.237 "state": "completed",
00:15:17.237 "digest": "sha256",
00:15:17.237 "dhgroup": "ffdhe4096"
00:15:17.237 }
00:15:17.237 }
00:15:17.237 ]'
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:17.237 21:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:17.237 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:17.237 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:17.495 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:17.495 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:17.495 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:17.751 21:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=:
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:18.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:18.682 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0
--dhchap-ctrlr-key ckey0 00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.939 21:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.503 00:15:19.503 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.503 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.503 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.760 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.760 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.760 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.760 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.760 21:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:20.017 { 00:15:20.017 "cntlid": 33, 00:15:20.017 "qid": 0, 00:15:20.017 "state": "enabled", 00:15:20.017 "thread": "nvmf_tgt_poll_group_000", 00:15:20.017 "listen_address": { 00:15:20.017 "trtype": "TCP", 00:15:20.017 "adrfam": "IPv4", 00:15:20.017 "traddr": "10.0.0.2", 00:15:20.017 "trsvcid": "4420" 00:15:20.017 }, 00:15:20.017 "peer_address": { 00:15:20.017 "trtype": "TCP", 00:15:20.017 "adrfam": "IPv4", 00:15:20.017 "traddr": "10.0.0.1", 00:15:20.017 "trsvcid": "56118" 00:15:20.017 }, 00:15:20.017 "auth": { 00:15:20.017 "state": "completed", 00:15:20.017 "digest": "sha256", 00:15:20.017 "dhgroup": "ffdhe6144" 00:15:20.017 } 00:15:20.017 } 00:15:20.017 ]' 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.017 21:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.274 21:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.649 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.650 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.214 00:15:22.214 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.214 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.214 21:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.471 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.471 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.471 21:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.471 21:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.471 21:37:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.471 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.471 { 00:15:22.471 "cntlid": 35, 00:15:22.471 "qid": 0, 00:15:22.471 "state": "enabled", 00:15:22.471 "thread": "nvmf_tgt_poll_group_000", 00:15:22.471 "listen_address": { 00:15:22.471 "trtype": "TCP", 00:15:22.471 "adrfam": "IPv4", 00:15:22.471 "traddr": "10.0.0.2", 00:15:22.471 "trsvcid": "4420" 00:15:22.471 }, 00:15:22.471 "peer_address": { 00:15:22.471 "trtype": "TCP", 00:15:22.471 "adrfam": "IPv4", 00:15:22.471 "traddr": "10.0.0.1", 00:15:22.471 "trsvcid": "56152" 00:15:22.471 }, 00:15:22.471 "auth": { 00:15:22.471 "state": "completed", 00:15:22.471 "digest": "sha256", 00:15:22.471 "dhgroup": "ffdhe6144" 00:15:22.471 } 00:15:22.471 } 00:15:22.471 ]' 00:15:22.471 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.471 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.471 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.727 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.727 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.727 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.727 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.727 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.983 21:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:15:23.912 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.912 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:23.912 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.912 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.912 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.912 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.912 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.912 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.477 21:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.040 00:15:25.040 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.040 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.040 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.297 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.298 21:37:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.298 { 00:15:25.298 "cntlid": 37, 00:15:25.298 "qid": 0, 00:15:25.298 "state": "enabled", 00:15:25.298 "thread": "nvmf_tgt_poll_group_000", 00:15:25.298 "listen_address": { 00:15:25.298 "trtype": "TCP", 00:15:25.298 "adrfam": "IPv4", 00:15:25.298 "traddr": "10.0.0.2", 00:15:25.298 "trsvcid": "4420" 00:15:25.298 }, 00:15:25.298 "peer_address": { 00:15:25.298 "trtype": "TCP", 00:15:25.298 "adrfam": "IPv4", 00:15:25.298 "traddr": "10.0.0.1", 00:15:25.298 "trsvcid": "56180" 00:15:25.298 }, 00:15:25.298 "auth": { 00:15:25.298 "state": "completed", 00:15:25.298 "digest": "sha256", 00:15:25.298 "dhgroup": "ffdhe6144" 00:15:25.298 } 00:15:25.298 } 00:15:25.298 ]' 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.298 21:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.555 21:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:26.926 21:37:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.926 21:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.491 00:15:27.491 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.491 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.491 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.056 21:37:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.056 { 00:15:28.056 "cntlid": 39, 00:15:28.056 "qid": 0, 00:15:28.056 "state": "enabled", 00:15:28.056 "thread": "nvmf_tgt_poll_group_000", 00:15:28.056 "listen_address": { 00:15:28.056 "trtype": "TCP", 00:15:28.056 "adrfam": "IPv4", 00:15:28.056 "traddr": "10.0.0.2", 00:15:28.056 "trsvcid": "4420" 00:15:28.056 }, 00:15:28.056 "peer_address": { 00:15:28.056 "trtype": "TCP", 00:15:28.056 "adrfam": "IPv4", 00:15:28.056 "traddr": "10.0.0.1", 00:15:28.056 "trsvcid": "52544" 00:15:28.056 }, 00:15:28.056 "auth": { 00:15:28.056 "state": "completed", 00:15:28.056 "digest": "sha256", 00:15:28.056 "dhgroup": "ffdhe6144" 00:15:28.056 } 00:15:28.056 } 00:15:28.056 ]' 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.056 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.313 21:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:15:29.244 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.244 21:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:29.244 21:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.244 21:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.244 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.244 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.244 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.244 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:29.244 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.807 21:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.737 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.737 { 00:15:30.737 "cntlid": 41, 00:15:30.737 "qid": 0, 00:15:30.737 "state": "enabled", 00:15:30.737 "thread": "nvmf_tgt_poll_group_000", 00:15:30.737 "listen_address": { 00:15:30.737 "trtype": "TCP", 00:15:30.737 "adrfam": "IPv4", 00:15:30.737 "traddr": "10.0.0.2", 00:15:30.737 "trsvcid": "4420" 00:15:30.737 }, 00:15:30.737 "peer_address": { 00:15:30.737 "trtype": "TCP", 00:15:30.737 "adrfam": "IPv4", 00:15:30.737 "traddr": "10.0.0.1", 00:15:30.737 "trsvcid": "52574" 00:15:30.737 }, 00:15:30.737 "auth": { 00:15:30.737 "state": "completed", 00:15:30.737 "digest": "sha256", 00:15:30.737 "dhgroup": "ffdhe8192" 00:15:30.737 } 00:15:30.737 } 00:15:30.737 ]' 00:15:30.737 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.993 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.993 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.993 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:30.993 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.993 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.993 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.993 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:31.250 21:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:15:32.178 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.435 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:32.435 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.435 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.435 21:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.435 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.435 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.435 21:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.691 21:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.619 00:15:33.619 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.619 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.619 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.877 { 00:15:33.877 "cntlid": 43, 00:15:33.877 "qid": 0, 00:15:33.877 "state": "enabled", 00:15:33.877 "thread": "nvmf_tgt_poll_group_000", 00:15:33.877 "listen_address": { 00:15:33.877 "trtype": "TCP", 00:15:33.877 "adrfam": "IPv4", 00:15:33.877 "traddr": "10.0.0.2", 00:15:33.877 "trsvcid": "4420" 00:15:33.877 }, 00:15:33.877 "peer_address": { 00:15:33.877 "trtype": "TCP", 00:15:33.877 "adrfam": "IPv4", 00:15:33.877 "traddr": "10.0.0.1", 00:15:33.877 "trsvcid": "52594" 00:15:33.877 }, 00:15:33.877 "auth": { 00:15:33.877 "state": "completed", 00:15:33.877 "digest": "sha256", 00:15:33.877 "dhgroup": "ffdhe8192" 00:15:33.877 } 00:15:33.877 } 00:15:33.877 ]' 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.877 21:37:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.134 21:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:15:35.067 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.325 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:35.325 21:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.325 21:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.325 21:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.325 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.325 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.325 21:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.582 21:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.514 00:15:36.514 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.514 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.514 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.771 { 00:15:36.771 "cntlid": 45, 00:15:36.771 "qid": 0, 00:15:36.771 "state": "enabled", 00:15:36.771 "thread": "nvmf_tgt_poll_group_000", 00:15:36.771 "listen_address": { 00:15:36.771 "trtype": "TCP", 00:15:36.771 "adrfam": "IPv4", 00:15:36.771 "traddr": "10.0.0.2", 00:15:36.771 "trsvcid": "4420" 00:15:36.771 }, 00:15:36.771 "peer_address": { 00:15:36.771 "trtype": "TCP", 00:15:36.771 "adrfam": "IPv4", 00:15:36.771 "traddr": "10.0.0.1", 00:15:36.771 "trsvcid": "48054" 00:15:36.771 }, 00:15:36.771 "auth": { 00:15:36.771 "state": "completed", 00:15:36.771 "digest": "sha256", 00:15:36.771 "dhgroup": "ffdhe8192" 00:15:36.771 } 00:15:36.771 } 00:15:36.771 ]' 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:36.771 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.027 21:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:15:38.398 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.398 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:38.398 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.398 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.398 21:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.398 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.398 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.398 21:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:38.398 21:37:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.398 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:39.331 00:15:39.331 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.331 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.331 21:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.588 { 00:15:39.588 "cntlid": 47, 00:15:39.588 "qid": 0, 00:15:39.588 "state": "enabled", 00:15:39.588 "thread": "nvmf_tgt_poll_group_000", 00:15:39.588 "listen_address": { 00:15:39.588 "trtype": "TCP", 00:15:39.588 "adrfam": "IPv4", 00:15:39.588 "traddr": "10.0.0.2", 00:15:39.588 "trsvcid": "4420" 00:15:39.588 }, 00:15:39.588 "peer_address": { 00:15:39.588 "trtype": "TCP", 00:15:39.588 "adrfam": "IPv4", 00:15:39.588 "traddr": "10.0.0.1", 00:15:39.588 "trsvcid": "48066" 00:15:39.588 }, 00:15:39.588 "auth": { 00:15:39.588 "state": "completed", 00:15:39.588 "digest": "sha256", 00:15:39.588 "dhgroup": "ffdhe8192" 00:15:39.588 } 00:15:39.588 } 00:15:39.588 ]' 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.588 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.845 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.845 21:37:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.845 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.102 21:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.034 21:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.292 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.860 00:15:41.860 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.860 21:37:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.860 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.118 { 00:15:42.118 "cntlid": 49, 00:15:42.118 "qid": 0, 00:15:42.118 "state": "enabled", 00:15:42.118 "thread": "nvmf_tgt_poll_group_000", 00:15:42.118 "listen_address": { 00:15:42.118 "trtype": "TCP", 00:15:42.118 "adrfam": "IPv4", 00:15:42.118 "traddr": "10.0.0.2", 00:15:42.118 "trsvcid": "4420" 00:15:42.118 }, 00:15:42.118 "peer_address": { 00:15:42.118 "trtype": "TCP", 00:15:42.118 "adrfam": "IPv4", 00:15:42.118 "traddr": "10.0.0.1", 00:15:42.118 "trsvcid": "48094" 00:15:42.118 }, 00:15:42.118 "auth": { 00:15:42.118 "state": "completed", 00:15:42.118 "digest": "sha384", 00:15:42.118 "dhgroup": "null" 00:15:42.118 } 00:15:42.118 } 00:15:42.118 ]' 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.118 21:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.376 21:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.755 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.331 00:15:44.331 
21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.331 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.331 21:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.331 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.331 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.331 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.331 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.331 21:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.331 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.331 { 00:15:44.331 "cntlid": 51, 00:15:44.331 "qid": 0, 00:15:44.331 "state": "enabled", 00:15:44.331 "thread": "nvmf_tgt_poll_group_000", 00:15:44.331 "listen_address": { 00:15:44.331 "trtype": "TCP", 00:15:44.331 "adrfam": "IPv4", 00:15:44.331 "traddr": "10.0.0.2", 00:15:44.331 "trsvcid": "4420" 00:15:44.331 }, 00:15:44.331 "peer_address": { 00:15:44.331 "trtype": "TCP", 00:15:44.331 "adrfam": "IPv4", 00:15:44.331 "traddr": "10.0.0.1", 00:15:44.331 "trsvcid": "48132" 00:15:44.331 }, 00:15:44.331 "auth": { 00:15:44.331 "state": "completed", 00:15:44.331 "digest": "sha384", 00:15:44.331 "dhgroup": "null" 00:15:44.331 } 00:15:44.331 } 00:15:44.331 ]' 00:15:44.331 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.667 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.667 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.667 21:37:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:44.667 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.667 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.667 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.667 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.965 21:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:15:45.929 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.929 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:45.929 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.929 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.929 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.929 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.929 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.929 21:37:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.186 21:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:46.444 00:15:46.444 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.444 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.444 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.011 { 00:15:47.011 "cntlid": 53, 00:15:47.011 "qid": 0, 00:15:47.011 "state": "enabled", 00:15:47.011 "thread": "nvmf_tgt_poll_group_000", 00:15:47.011 "listen_address": { 00:15:47.011 "trtype": "TCP", 00:15:47.011 "adrfam": "IPv4", 00:15:47.011 "traddr": "10.0.0.2", 00:15:47.011 "trsvcid": "4420" 00:15:47.011 }, 00:15:47.011 "peer_address": { 00:15:47.011 "trtype": "TCP", 00:15:47.011 "adrfam": "IPv4", 00:15:47.011 "traddr": "10.0.0.1", 00:15:47.011 "trsvcid": "44534" 00:15:47.011 }, 00:15:47.011 "auth": { 00:15:47.011 "state": "completed", 00:15:47.011 "digest": "sha384", 00:15:47.011 "dhgroup": "null" 00:15:47.011 } 00:15:47.011 } 00:15:47.011 ]' 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.011 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.269 21:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:15:48.202 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.202 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:48.202 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.202 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.202 21:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.202 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.202 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:15:48.202 21:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.460 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:15:48.718 00:15:48.975 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.975 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.975 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.234 { 00:15:49.234 "cntlid": 55, 00:15:49.234 "qid": 0, 00:15:49.234 "state": "enabled", 00:15:49.234 "thread": "nvmf_tgt_poll_group_000", 00:15:49.234 "listen_address": { 00:15:49.234 "trtype": "TCP", 00:15:49.234 "adrfam": "IPv4", 00:15:49.234 "traddr": "10.0.0.2", 00:15:49.234 "trsvcid": "4420" 00:15:49.234 }, 00:15:49.234 "peer_address": { 00:15:49.234 "trtype": "TCP", 00:15:49.234 "adrfam": "IPv4", 00:15:49.234 "traddr": "10.0.0.1", 00:15:49.234 "trsvcid": "44564" 00:15:49.234 }, 00:15:49.234 "auth": { 00:15:49.234 "state": "completed", 00:15:49.234 "digest": "sha384", 00:15:49.234 "dhgroup": "null" 00:15:49.234 } 00:15:49.234 } 00:15:49.234 ]' 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.234 
21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.234 21:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.492 21:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.870 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.127 00:15:51.127 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.127 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.127 21:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.384 { 00:15:51.384 "cntlid": 57, 00:15:51.384 "qid": 0, 00:15:51.384 "state": "enabled", 00:15:51.384 "thread": "nvmf_tgt_poll_group_000", 00:15:51.384 "listen_address": { 00:15:51.384 "trtype": "TCP", 00:15:51.384 "adrfam": "IPv4", 00:15:51.384 "traddr": "10.0.0.2", 00:15:51.384 "trsvcid": "4420" 00:15:51.384 }, 00:15:51.384 "peer_address": { 00:15:51.384 "trtype": "TCP", 00:15:51.384 "adrfam": "IPv4", 00:15:51.384 "traddr": "10.0.0.1", 00:15:51.384 "trsvcid": "44596" 00:15:51.384 }, 00:15:51.384 "auth": { 00:15:51.384 "state": "completed", 00:15:51.384 "digest": "sha384", 00:15:51.384 "dhgroup": "ffdhe2048" 00:15:51.384 } 00:15:51.384 } 00:15:51.384 ]' 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:15:51.384 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.640 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.640 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.640 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.640 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.640 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.896 21:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:15:52.825 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.825 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:52.825 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.825 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.825 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.825 21:37:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.825 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.825 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.082 21:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.646 00:15:53.646 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.646 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.646 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.904 { 00:15:53.904 "cntlid": 59, 00:15:53.904 "qid": 0, 00:15:53.904 "state": "enabled", 00:15:53.904 "thread": "nvmf_tgt_poll_group_000", 00:15:53.904 "listen_address": { 00:15:53.904 "trtype": "TCP", 00:15:53.904 "adrfam": "IPv4", 00:15:53.904 "traddr": "10.0.0.2", 00:15:53.904 "trsvcid": "4420" 00:15:53.904 }, 00:15:53.904 "peer_address": { 00:15:53.904 "trtype": "TCP", 00:15:53.904 "adrfam": "IPv4", 00:15:53.904 "traddr": "10.0.0.1", 00:15:53.904 "trsvcid": "44626" 00:15:53.904 }, 00:15:53.904 "auth": { 00:15:53.904 "state": "completed", 00:15:53.904 "digest": "sha384", 00:15:53.904 "dhgroup": "ffdhe2048" 00:15:53.904 } 00:15:53.904 } 00:15:53.904 ]' 00:15:53.904 
21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.904 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.162 21:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:15:55.534 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.534 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:55.534 21:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.534 21:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.534 21:37:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.534 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.534 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:55.534 21:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:55.534 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.099 00:15:56.100 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.100 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.100 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.357 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.357 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.357 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.357 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.357 21:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.357 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.357 { 00:15:56.357 "cntlid": 61, 00:15:56.357 "qid": 0, 00:15:56.357 "state": "enabled", 00:15:56.357 "thread": "nvmf_tgt_poll_group_000", 00:15:56.357 "listen_address": { 00:15:56.357 "trtype": "TCP", 00:15:56.357 "adrfam": "IPv4", 00:15:56.357 "traddr": "10.0.0.2", 00:15:56.357 "trsvcid": "4420" 00:15:56.357 }, 00:15:56.357 "peer_address": { 00:15:56.357 "trtype": "TCP", 00:15:56.357 "adrfam": "IPv4", 00:15:56.357 "traddr": "10.0.0.1", 00:15:56.357 "trsvcid": "60512" 00:15:56.357 }, 00:15:56.357 "auth": { 00:15:56.358 "state": "completed", 00:15:56.358 "digest": 
"sha384", 00:15:56.358 "dhgroup": "ffdhe2048" 00:15:56.358 } 00:15:56.358 } 00:15:56.358 ]' 00:15:56.358 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.358 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.358 21:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.358 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.358 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.358 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.358 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.358 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.616 21:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.001 21:37:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.001 21:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:58.258 00:15:58.516 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.516 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.516 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.516 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.516 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.516 21:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.516 21:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.774 { 00:15:58.774 "cntlid": 63, 00:15:58.774 "qid": 0, 00:15:58.774 "state": "enabled", 00:15:58.774 "thread": "nvmf_tgt_poll_group_000", 00:15:58.774 "listen_address": { 00:15:58.774 "trtype": "TCP", 00:15:58.774 "adrfam": "IPv4", 00:15:58.774 "traddr": "10.0.0.2", 00:15:58.774 "trsvcid": "4420" 00:15:58.774 }, 00:15:58.774 "peer_address": { 00:15:58.774 "trtype": "TCP", 00:15:58.774 "adrfam": "IPv4", 00:15:58.774 "traddr": "10.0.0.1", 00:15:58.774 "trsvcid": "60544" 00:15:58.774 }, 00:15:58.774 "auth": 
{ 00:15:58.774 "state": "completed", 00:15:58.774 "digest": "sha384", 00:15:58.774 "dhgroup": "ffdhe2048" 00:15:58.774 } 00:15:58.774 } 00:15:58.774 ]' 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.774 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.032 21:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:16:00.405 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.405 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:00.405 21:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.405 21:37:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.405 21:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.405 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.405 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.405 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.405 21:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.405 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.970 00:16:00.970 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.970 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.970 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.228 { 00:16:01.228 "cntlid": 65, 00:16:01.228 "qid": 0, 00:16:01.228 "state": "enabled", 00:16:01.228 "thread": "nvmf_tgt_poll_group_000", 00:16:01.228 "listen_address": { 00:16:01.228 "trtype": "TCP", 00:16:01.228 "adrfam": "IPv4", 00:16:01.228 "traddr": "10.0.0.2", 00:16:01.228 "trsvcid": "4420" 00:16:01.228 }, 00:16:01.228 "peer_address": { 00:16:01.228 "trtype": "TCP", 
00:16:01.228 "adrfam": "IPv4", 00:16:01.228 "traddr": "10.0.0.1", 00:16:01.228 "trsvcid": "60570" 00:16:01.228 }, 00:16:01.228 "auth": { 00:16:01.228 "state": "completed", 00:16:01.228 "digest": "sha384", 00:16:01.228 "dhgroup": "ffdhe3072" 00:16:01.228 } 00:16:01.228 } 00:16:01.228 ]' 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.228 21:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.486 21:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.858 21:37:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.858 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.419 00:16:03.419 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.419 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.419 21:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.674 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.674 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.674 21:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.674 21:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.674 21:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.674 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.674 { 00:16:03.674 "cntlid": 67, 00:16:03.674 "qid": 0, 00:16:03.674 "state": "enabled", 00:16:03.674 "thread": "nvmf_tgt_poll_group_000", 00:16:03.674 "listen_address": { 00:16:03.674 "trtype": "TCP", 00:16:03.674 "adrfam": 
"IPv4", 00:16:03.674 "traddr": "10.0.0.2", 00:16:03.674 "trsvcid": "4420" 00:16:03.674 }, 00:16:03.674 "peer_address": { 00:16:03.674 "trtype": "TCP", 00:16:03.674 "adrfam": "IPv4", 00:16:03.674 "traddr": "10.0.0.1", 00:16:03.674 "trsvcid": "60594" 00:16:03.674 }, 00:16:03.674 "auth": { 00:16:03.674 "state": "completed", 00:16:03.675 "digest": "sha384", 00:16:03.675 "dhgroup": "ffdhe3072" 00:16:03.675 } 00:16:03.675 } 00:16:03.675 ]' 00:16:03.675 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.675 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.675 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.675 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.675 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.675 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.675 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.675 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.930 21:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:16:05.298 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.299 21:37:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.299 21:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.863 00:16:05.863 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.863 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.863 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.119 { 00:16:06.119 "cntlid": 69, 00:16:06.119 "qid": 0, 00:16:06.119 "state": "enabled", 00:16:06.119 "thread": 
"nvmf_tgt_poll_group_000", 00:16:06.119 "listen_address": { 00:16:06.119 "trtype": "TCP", 00:16:06.119 "adrfam": "IPv4", 00:16:06.119 "traddr": "10.0.0.2", 00:16:06.119 "trsvcid": "4420" 00:16:06.119 }, 00:16:06.119 "peer_address": { 00:16:06.119 "trtype": "TCP", 00:16:06.119 "adrfam": "IPv4", 00:16:06.119 "traddr": "10.0.0.1", 00:16:06.119 "trsvcid": "38646" 00:16:06.119 }, 00:16:06.119 "auth": { 00:16:06.119 "state": "completed", 00:16:06.119 "digest": "sha384", 00:16:06.119 "dhgroup": "ffdhe3072" 00:16:06.119 } 00:16:06.119 } 00:16:06.119 ]' 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.119 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.120 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.120 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.120 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.120 21:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.377 21:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:16:07.308 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.564 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:07.564 21:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.564 21:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.564 21:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.564 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.564 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:07.564 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.821 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.077 00:16:08.078 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.078 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.078 21:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.335 { 00:16:08.335 "cntlid": 71, 00:16:08.335 "qid": 0, 00:16:08.335 "state": "enabled", 00:16:08.335 "thread": 
"nvmf_tgt_poll_group_000", 00:16:08.335 "listen_address": { 00:16:08.335 "trtype": "TCP", 00:16:08.335 "adrfam": "IPv4", 00:16:08.335 "traddr": "10.0.0.2", 00:16:08.335 "trsvcid": "4420" 00:16:08.335 }, 00:16:08.335 "peer_address": { 00:16:08.335 "trtype": "TCP", 00:16:08.335 "adrfam": "IPv4", 00:16:08.335 "traddr": "10.0.0.1", 00:16:08.335 "trsvcid": "38672" 00:16:08.335 }, 00:16:08.335 "auth": { 00:16:08.335 "state": "completed", 00:16:08.335 "digest": "sha384", 00:16:08.335 "dhgroup": "ffdhe3072" 00:16:08.335 } 00:16:08.335 } 00:16:08.335 ]' 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.335 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.591 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.591 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.591 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.591 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.591 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.875 21:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.801 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:09.801 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.057 21:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.624 00:16:10.624 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.624 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.624 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:10.881 { 00:16:10.881 "cntlid": 73, 00:16:10.881 "qid": 0, 00:16:10.881 "state": "enabled", 00:16:10.881 "thread": "nvmf_tgt_poll_group_000", 00:16:10.881 "listen_address": { 00:16:10.881 "trtype": "TCP", 00:16:10.881 "adrfam": "IPv4", 00:16:10.881 "traddr": "10.0.0.2", 00:16:10.881 "trsvcid": "4420" 00:16:10.881 }, 00:16:10.881 "peer_address": { 00:16:10.881 "trtype": "TCP", 00:16:10.881 "adrfam": "IPv4", 00:16:10.881 "traddr": "10.0.0.1", 00:16:10.881 "trsvcid": "38688" 00:16:10.881 }, 00:16:10.881 "auth": { 00:16:10.881 "state": "completed", 00:16:10.881 "digest": "sha384", 00:16:10.881 "dhgroup": "ffdhe4096" 00:16:10.881 } 00:16:10.881 } 00:16:10.881 ]' 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.881 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.137 21:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:16:12.504 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.504 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:12.504 21:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.504 21:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.504 21:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.504 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.504 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:12.504 21:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.504 21:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.505 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.505 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.066 00:16:13.066 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.066 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.066 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.323 21:38:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.323 { 00:16:13.323 "cntlid": 75, 00:16:13.323 "qid": 0, 00:16:13.323 "state": "enabled", 00:16:13.323 "thread": "nvmf_tgt_poll_group_000", 00:16:13.323 "listen_address": { 00:16:13.323 "trtype": "TCP", 00:16:13.323 "adrfam": "IPv4", 00:16:13.323 "traddr": "10.0.0.2", 00:16:13.323 "trsvcid": "4420" 00:16:13.323 }, 00:16:13.323 "peer_address": { 00:16:13.323 "trtype": "TCP", 00:16:13.323 "adrfam": "IPv4", 00:16:13.323 "traddr": "10.0.0.1", 00:16:13.323 "trsvcid": "38712" 00:16:13.323 }, 00:16:13.323 "auth": { 00:16:13.323 "state": "completed", 00:16:13.323 "digest": "sha384", 00:16:13.323 "dhgroup": "ffdhe4096" 00:16:13.323 } 00:16:13.323 } 00:16:13.323 ]' 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.323 21:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.323 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.323 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.323 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.580 21:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:16:14.512 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.512 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:14.512 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.512 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.512 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.512 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.512 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:14.512 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.078 21:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.336 00:16:15.336 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.336 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.336 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.592 21:38:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.592 { 00:16:15.592 "cntlid": 77, 00:16:15.592 "qid": 0, 00:16:15.592 "state": "enabled", 00:16:15.592 "thread": "nvmf_tgt_poll_group_000", 00:16:15.592 "listen_address": { 00:16:15.592 "trtype": "TCP", 00:16:15.592 "adrfam": "IPv4", 00:16:15.592 "traddr": "10.0.0.2", 00:16:15.592 "trsvcid": "4420" 00:16:15.592 }, 00:16:15.592 "peer_address": { 00:16:15.592 "trtype": "TCP", 00:16:15.592 "adrfam": "IPv4", 00:16:15.592 "traddr": "10.0.0.1", 00:16:15.592 "trsvcid": "60104" 00:16:15.592 }, 00:16:15.592 "auth": { 00:16:15.592 "state": "completed", 00:16:15.592 "digest": "sha384", 00:16:15.592 "dhgroup": "ffdhe4096" 00:16:15.592 } 00:16:15.592 } 00:16:15.592 ]' 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.592 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.593 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.593 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.593 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.593 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.157 21:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:16:17.090 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.090 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:17.090 21:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.090 21:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.090 21:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.090 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.090 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.090 21:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.348 21:38:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.348 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.913 00:16:17.913 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.913 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.913 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.171 21:38:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.171 { 00:16:18.171 "cntlid": 79, 00:16:18.171 "qid": 0, 00:16:18.171 "state": "enabled", 00:16:18.171 "thread": "nvmf_tgt_poll_group_000", 00:16:18.171 "listen_address": { 00:16:18.171 "trtype": "TCP", 00:16:18.171 "adrfam": "IPv4", 00:16:18.171 "traddr": "10.0.0.2", 00:16:18.171 "trsvcid": "4420" 00:16:18.171 }, 00:16:18.171 "peer_address": { 00:16:18.171 "trtype": "TCP", 00:16:18.171 "adrfam": "IPv4", 00:16:18.171 "traddr": "10.0.0.1", 00:16:18.171 "trsvcid": "60134" 00:16:18.171 }, 00:16:18.171 "auth": { 00:16:18.171 "state": "completed", 00:16:18.171 "digest": "sha384", 00:16:18.171 "dhgroup": "ffdhe4096" 00:16:18.171 } 00:16:18.171 } 00:16:18.171 ]' 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.171 21:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.430 21:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.808 21:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.377 00:16:20.377 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.377 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.377 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.636 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.636 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:20.636 21:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.636 21:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.636 21:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.636 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.636 { 00:16:20.636 "cntlid": 81, 00:16:20.636 "qid": 0, 00:16:20.636 "state": "enabled", 00:16:20.636 "thread": "nvmf_tgt_poll_group_000", 00:16:20.636 "listen_address": { 00:16:20.636 "trtype": "TCP", 00:16:20.636 "adrfam": "IPv4", 00:16:20.636 "traddr": "10.0.0.2", 00:16:20.636 "trsvcid": "4420" 00:16:20.636 }, 00:16:20.636 "peer_address": { 00:16:20.636 "trtype": "TCP", 00:16:20.636 "adrfam": "IPv4", 00:16:20.636 "traddr": "10.0.0.1", 00:16:20.636 "trsvcid": "60172" 00:16:20.636 }, 00:16:20.636 "auth": { 00:16:20.636 "state": "completed", 00:16:20.636 "digest": "sha384", 00:16:20.636 "dhgroup": "ffdhe6144" 00:16:20.636 } 00:16:20.636 } 00:16:20.636 ]' 00:16:20.636 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.894 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.895 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.895 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.895 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.895 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.895 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.895 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:21.153 21:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:16:22.092 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.092 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:22.092 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.092 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.350 21:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.350 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.350 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.350 21:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.609 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.610 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.176 00:16:23.176 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.176 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.176 21:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.434 { 00:16:23.434 "cntlid": 83, 00:16:23.434 "qid": 0, 00:16:23.434 "state": "enabled", 00:16:23.434 "thread": "nvmf_tgt_poll_group_000", 00:16:23.434 "listen_address": { 00:16:23.434 "trtype": "TCP", 00:16:23.434 "adrfam": "IPv4", 00:16:23.434 "traddr": "10.0.0.2", 00:16:23.434 "trsvcid": "4420" 00:16:23.434 }, 00:16:23.434 "peer_address": { 00:16:23.434 "trtype": "TCP", 00:16:23.434 "adrfam": "IPv4", 00:16:23.434 "traddr": "10.0.0.1", 00:16:23.434 "trsvcid": "60202" 00:16:23.434 }, 00:16:23.434 "auth": { 00:16:23.434 "state": "completed", 00:16:23.434 "digest": "sha384", 00:16:23.434 "dhgroup": "ffdhe6144" 00:16:23.434 } 00:16:23.434 } 00:16:23.434 ]' 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.434 21:38:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.003 21:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:16:24.943 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.943 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:24.943 21:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.943 21:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.943 21:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.943 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.943 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.943 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.200 21:38:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.766 00:16:25.766 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.766 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.766 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.025 { 00:16:26.025 "cntlid": 85, 00:16:26.025 "qid": 0, 00:16:26.025 "state": "enabled", 00:16:26.025 "thread": "nvmf_tgt_poll_group_000", 00:16:26.025 "listen_address": { 00:16:26.025 "trtype": "TCP", 00:16:26.025 "adrfam": "IPv4", 00:16:26.025 "traddr": "10.0.0.2", 00:16:26.025 "trsvcid": "4420" 00:16:26.025 }, 00:16:26.025 "peer_address": { 00:16:26.025 "trtype": "TCP", 00:16:26.025 "adrfam": "IPv4", 00:16:26.025 "traddr": "10.0.0.1", 00:16:26.025 "trsvcid": "44242" 00:16:26.025 }, 00:16:26.025 "auth": { 00:16:26.025 "state": "completed", 00:16:26.025 "digest": "sha384", 00:16:26.025 "dhgroup": "ffdhe6144" 00:16:26.025 } 00:16:26.025 } 00:16:26.025 ]' 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.025 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.283 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.283 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:26.283 21:38:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.543 21:38:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:16:27.481 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.481 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:27.481 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.481 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.481 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.481 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.482 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.482 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:27.740 21:38:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.740 21:38:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.307 00:16:28.307 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.307 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.307 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:28.565 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.565 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.565 21:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.565 21:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.565 21:38:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.565 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.565 { 00:16:28.565 "cntlid": 87, 00:16:28.565 "qid": 0, 00:16:28.565 "state": "enabled", 00:16:28.565 "thread": "nvmf_tgt_poll_group_000", 00:16:28.565 "listen_address": { 00:16:28.565 "trtype": "TCP", 00:16:28.565 "adrfam": "IPv4", 00:16:28.565 "traddr": "10.0.0.2", 00:16:28.565 "trsvcid": "4420" 00:16:28.565 }, 00:16:28.565 "peer_address": { 00:16:28.565 "trtype": "TCP", 00:16:28.565 "adrfam": "IPv4", 00:16:28.565 "traddr": "10.0.0.1", 00:16:28.565 "trsvcid": "44264" 00:16:28.565 }, 00:16:28.565 "auth": { 00:16:28.565 "state": "completed", 00:16:28.565 "digest": "sha384", 00:16:28.565 "dhgroup": "ffdhe6144" 00:16:28.565 } 00:16:28.565 } 00:16:28.565 ]' 00:16:28.565 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.823 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.824 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.824 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.824 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.824 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.824 21:38:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.824 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.082 21:38:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:16:30.019 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.019 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:30.019 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.019 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.278 21:38:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.278 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.278 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.278 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.278 21:38:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.537 21:38:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.473 00:16:31.473 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.473 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.473 21:38:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.731 { 00:16:31.731 "cntlid": 89, 00:16:31.731 "qid": 0, 00:16:31.731 "state": "enabled", 00:16:31.731 "thread": "nvmf_tgt_poll_group_000", 00:16:31.731 "listen_address": { 00:16:31.731 "trtype": "TCP", 00:16:31.731 "adrfam": "IPv4", 00:16:31.731 "traddr": "10.0.0.2", 00:16:31.731 "trsvcid": "4420" 00:16:31.731 }, 00:16:31.731 "peer_address": { 00:16:31.731 "trtype": "TCP", 00:16:31.731 "adrfam": "IPv4", 00:16:31.731 "traddr": "10.0.0.1", 00:16:31.731 "trsvcid": "44282" 00:16:31.731 }, 00:16:31.731 "auth": { 00:16:31.731 "state": "completed", 00:16:31.731 "digest": "sha384", 00:16:31.731 "dhgroup": "ffdhe8192" 00:16:31.731 } 00:16:31.731 } 00:16:31.731 ]' 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.731 21:38:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.731 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.988 21:38:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:16:33.361 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.361 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:33.361 21:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.361 21:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.361 21:38:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.361 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.361 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:33.361 21:38:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.361 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.298 00:16:34.298 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:16:34.298 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.298 21:38:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.556 { 00:16:34.556 "cntlid": 91, 00:16:34.556 "qid": 0, 00:16:34.556 "state": "enabled", 00:16:34.556 "thread": "nvmf_tgt_poll_group_000", 00:16:34.556 "listen_address": { 00:16:34.556 "trtype": "TCP", 00:16:34.556 "adrfam": "IPv4", 00:16:34.556 "traddr": "10.0.0.2", 00:16:34.556 "trsvcid": "4420" 00:16:34.556 }, 00:16:34.556 "peer_address": { 00:16:34.556 "trtype": "TCP", 00:16:34.556 "adrfam": "IPv4", 00:16:34.556 "traddr": "10.0.0.1", 00:16:34.556 "trsvcid": "44304" 00:16:34.556 }, 00:16:34.556 "auth": { 00:16:34.556 "state": "completed", 00:16:34.556 "digest": "sha384", 00:16:34.556 "dhgroup": "ffdhe8192" 00:16:34.556 } 00:16:34.556 } 00:16:34.556 ]' 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:16:34.556 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.815 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.815 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.815 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.074 21:38:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:16:36.013 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.013 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:36.013 21:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.013 21:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.013 21:38:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.013 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.013 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.013 21:38:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.271 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.210 
00:16:37.210 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.210 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.210 21:38:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.469 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.469 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.469 21:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.469 21:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.469 21:38:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.469 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.469 { 00:16:37.469 "cntlid": 93, 00:16:37.469 "qid": 0, 00:16:37.469 "state": "enabled", 00:16:37.469 "thread": "nvmf_tgt_poll_group_000", 00:16:37.469 "listen_address": { 00:16:37.469 "trtype": "TCP", 00:16:37.469 "adrfam": "IPv4", 00:16:37.469 "traddr": "10.0.0.2", 00:16:37.469 "trsvcid": "4420" 00:16:37.469 }, 00:16:37.469 "peer_address": { 00:16:37.469 "trtype": "TCP", 00:16:37.469 "adrfam": "IPv4", 00:16:37.469 "traddr": "10.0.0.1", 00:16:37.469 "trsvcid": "34262" 00:16:37.469 }, 00:16:37.469 "auth": { 00:16:37.469 "state": "completed", 00:16:37.469 "digest": "sha384", 00:16:37.469 "dhgroup": "ffdhe8192" 00:16:37.469 } 00:16:37.469 } 00:16:37.469 ]' 00:16:37.469 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.727 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.727 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.727 21:38:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.727 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.727 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.727 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.727 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.987 21:38:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:16:38.927 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.927 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:38.927 21:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.927 21:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.927 21:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.927 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.927 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:16:38.927 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.186 21:38:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.122 
00:16:40.123 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.123 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.123 21:38:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.381 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.381 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.381 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.381 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.381 21:38:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.381 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.381 { 00:16:40.381 "cntlid": 95, 00:16:40.381 "qid": 0, 00:16:40.381 "state": "enabled", 00:16:40.381 "thread": "nvmf_tgt_poll_group_000", 00:16:40.381 "listen_address": { 00:16:40.381 "trtype": "TCP", 00:16:40.381 "adrfam": "IPv4", 00:16:40.381 "traddr": "10.0.0.2", 00:16:40.381 "trsvcid": "4420" 00:16:40.381 }, 00:16:40.381 "peer_address": { 00:16:40.381 "trtype": "TCP", 00:16:40.381 "adrfam": "IPv4", 00:16:40.381 "traddr": "10.0.0.1", 00:16:40.381 "trsvcid": "34284" 00:16:40.381 }, 00:16:40.381 "auth": { 00:16:40.381 "state": "completed", 00:16:40.381 "digest": "sha384", 00:16:40.381 "dhgroup": "ffdhe8192" 00:16:40.381 } 00:16:40.381 } 00:16:40.381 ]' 00:16:40.381 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.639 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.639 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.639 21:38:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.639 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.639 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.639 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.639 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.897 21:38:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.864 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.150 21:38:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.440 00:16:42.708 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.708 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.708 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.964 { 00:16:42.964 "cntlid": 97, 00:16:42.964 "qid": 0, 00:16:42.964 "state": "enabled", 00:16:42.964 "thread": "nvmf_tgt_poll_group_000", 00:16:42.964 "listen_address": { 00:16:42.964 "trtype": "TCP", 00:16:42.964 "adrfam": "IPv4", 00:16:42.964 "traddr": "10.0.0.2", 00:16:42.964 "trsvcid": "4420" 00:16:42.964 }, 00:16:42.964 "peer_address": { 00:16:42.964 "trtype": "TCP", 00:16:42.964 "adrfam": "IPv4", 00:16:42.964 "traddr": "10.0.0.1", 00:16:42.964 "trsvcid": "34312" 00:16:42.964 }, 00:16:42.964 "auth": { 00:16:42.964 "state": "completed", 00:16:42.964 "digest": "sha512", 00:16:42.964 "dhgroup": "null" 00:16:42.964 } 00:16:42.964 } 00:16:42.964 ]' 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.964 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.222 21:38:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:16:44.600 21:38:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.600 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.165 00:16:45.165 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.165 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.165 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.165 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.165 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.165 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.165 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.165 21:38:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.166 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.166 { 00:16:45.166 "cntlid": 99, 00:16:45.166 "qid": 0, 00:16:45.166 "state": "enabled", 00:16:45.166 "thread": "nvmf_tgt_poll_group_000", 00:16:45.166 "listen_address": { 00:16:45.166 "trtype": "TCP", 00:16:45.166 "adrfam": "IPv4", 00:16:45.166 "traddr": "10.0.0.2", 00:16:45.166 "trsvcid": "4420" 00:16:45.166 }, 00:16:45.166 "peer_address": { 00:16:45.166 "trtype": "TCP", 00:16:45.166 "adrfam": "IPv4", 00:16:45.166 "traddr": "10.0.0.1", 00:16:45.166 "trsvcid": "34332" 00:16:45.166 }, 00:16:45.166 "auth": { 00:16:45.166 "state": "completed", 00:16:45.166 "digest": "sha512", 00:16:45.166 "dhgroup": "null" 00:16:45.166 } 00:16:45.166 } 00:16:45.166 ]' 00:16:45.166 
21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.423 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.424 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.424 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:45.424 21:38:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.424 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.424 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.424 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.682 21:38:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:16:46.618 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.618 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:46.618 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.618 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.618 21:38:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.618 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.618 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:46.618 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.876 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.876 21:38:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.445 00:16:47.445 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.445 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.445 21:38:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.703 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.704 { 00:16:47.704 "cntlid": 101, 00:16:47.704 "qid": 0, 00:16:47.704 "state": "enabled", 00:16:47.704 "thread": "nvmf_tgt_poll_group_000", 00:16:47.704 "listen_address": { 00:16:47.704 "trtype": "TCP", 00:16:47.704 "adrfam": "IPv4", 00:16:47.704 "traddr": "10.0.0.2", 00:16:47.704 "trsvcid": "4420" 00:16:47.704 }, 00:16:47.704 "peer_address": { 00:16:47.704 "trtype": "TCP", 00:16:47.704 "adrfam": "IPv4", 00:16:47.704 "traddr": "10.0.0.1", 00:16:47.704 "trsvcid": "41008" 00:16:47.704 }, 00:16:47.704 "auth": { 00:16:47.704 "state": "completed", 00:16:47.704 "digest": "sha512", 00:16:47.704 "dhgroup": "null" 
00:16:47.704 } 00:16:47.704 } 00:16:47.704 ]' 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.704 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.963 21:38:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:16:49.342 21:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.342 21:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:49.342 21:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.342 21:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:49.342 21:38:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.342 21:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.342 21:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:49.342 21:38:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.342 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.342 21:38:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.905 00:16:49.905 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.905 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.905 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.163 { 00:16:50.163 "cntlid": 103, 00:16:50.163 "qid": 0, 00:16:50.163 "state": "enabled", 00:16:50.163 "thread": "nvmf_tgt_poll_group_000", 00:16:50.163 "listen_address": { 00:16:50.163 "trtype": "TCP", 00:16:50.163 "adrfam": "IPv4", 00:16:50.163 "traddr": "10.0.0.2", 00:16:50.163 "trsvcid": "4420" 00:16:50.163 }, 00:16:50.163 "peer_address": { 00:16:50.163 "trtype": "TCP", 00:16:50.163 "adrfam": "IPv4", 00:16:50.163 "traddr": "10.0.0.1", 00:16:50.163 "trsvcid": "41028" 00:16:50.163 }, 00:16:50.163 "auth": { 00:16:50.163 "state": "completed", 00:16:50.163 "digest": "sha512", 00:16:50.163 "dhgroup": "null" 00:16:50.163 } 00:16:50.163 } 
00:16:50.163 ]' 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.163 21:38:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.730 21:38:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.667 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.925 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.184 00:16:52.184 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.184 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.184 21:38:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.444 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.444 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.444 21:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.444 21:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.702 21:38:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.702 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.702 { 00:16:52.702 "cntlid": 105, 00:16:52.702 "qid": 0, 00:16:52.702 "state": "enabled", 00:16:52.702 "thread": "nvmf_tgt_poll_group_000", 00:16:52.702 "listen_address": { 00:16:52.702 "trtype": "TCP", 00:16:52.702 "adrfam": "IPv4", 00:16:52.702 "traddr": "10.0.0.2", 00:16:52.702 "trsvcid": "4420" 00:16:52.702 }, 00:16:52.702 "peer_address": { 00:16:52.702 "trtype": "TCP", 00:16:52.702 "adrfam": "IPv4", 00:16:52.702 "traddr": "10.0.0.1", 00:16:52.702 "trsvcid": "41054" 00:16:52.702 }, 00:16:52.702 "auth": { 00:16:52.702 
"state": "completed", 00:16:52.702 "digest": "sha512", 00:16:52.702 "dhgroup": "ffdhe2048" 00:16:52.702 } 00:16:52.702 } 00:16:52.702 ]' 00:16:52.702 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.703 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.703 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.703 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.703 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.703 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.703 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.703 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.961 21:38:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:16:54.339 21:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.339 21:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:54.339 21:38:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.339 21:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.339 21:38:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.339 21:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.339 21:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.339 21:38:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.339 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:54.339 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.339 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.340 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.908 00:16:54.908 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.908 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.908 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.167 { 00:16:55.167 "cntlid": 107, 00:16:55.167 "qid": 0, 00:16:55.167 "state": "enabled", 00:16:55.167 "thread": "nvmf_tgt_poll_group_000", 00:16:55.167 "listen_address": { 00:16:55.167 "trtype": "TCP", 00:16:55.167 "adrfam": "IPv4", 00:16:55.167 "traddr": "10.0.0.2", 00:16:55.167 "trsvcid": "4420" 00:16:55.167 }, 00:16:55.167 "peer_address": { 00:16:55.167 "trtype": "TCP", 
00:16:55.167 "adrfam": "IPv4", 00:16:55.167 "traddr": "10.0.0.1", 00:16:55.167 "trsvcid": "41100" 00:16:55.167 }, 00:16:55.167 "auth": { 00:16:55.167 "state": "completed", 00:16:55.167 "digest": "sha512", 00:16:55.167 "dhgroup": "ffdhe2048" 00:16:55.167 } 00:16:55.167 } 00:16:55.167 ]' 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.167 21:38:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.427 21:38:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.804 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.373 00:16:57.373 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.373 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.373 21:38:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.631 { 00:16:57.631 "cntlid": 109, 00:16:57.631 "qid": 0, 00:16:57.631 "state": "enabled", 00:16:57.631 "thread": "nvmf_tgt_poll_group_000", 00:16:57.631 "listen_address": { 00:16:57.631 "trtype": "TCP", 00:16:57.631 "adrfam": "IPv4", 00:16:57.631 "traddr": "10.0.0.2", 00:16:57.631 "trsvcid": "4420" 
00:16:57.631 }, 00:16:57.631 "peer_address": { 00:16:57.631 "trtype": "TCP", 00:16:57.631 "adrfam": "IPv4", 00:16:57.631 "traddr": "10.0.0.1", 00:16:57.631 "trsvcid": "47730" 00:16:57.631 }, 00:16:57.631 "auth": { 00:16:57.631 "state": "completed", 00:16:57.631 "digest": "sha512", 00:16:57.631 "dhgroup": "ffdhe2048" 00:16:57.631 } 00:16:57.631 } 00:16:57.631 ]' 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.631 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.891 21:38:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:16:58.828 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.086 21:38:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:59.086 21:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.086 21:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.086 21:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.086 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.086 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.086 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.345 21:38:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.345 21:38:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.603 00:16:59.603 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.603 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.603 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.861 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.861 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.861 21:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.861 21:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.861 21:38:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.861 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.861 { 00:16:59.861 "cntlid": 111, 00:16:59.861 "qid": 0, 00:16:59.861 "state": "enabled", 00:16:59.861 "thread": "nvmf_tgt_poll_group_000", 00:16:59.861 "listen_address": { 00:16:59.861 "trtype": "TCP", 00:16:59.861 "adrfam": "IPv4", 00:16:59.861 "traddr": "10.0.0.2", 
00:16:59.861 "trsvcid": "4420" 00:16:59.861 }, 00:16:59.861 "peer_address": { 00:16:59.861 "trtype": "TCP", 00:16:59.861 "adrfam": "IPv4", 00:16:59.861 "traddr": "10.0.0.1", 00:16:59.861 "trsvcid": "47744" 00:16:59.861 }, 00:16:59.861 "auth": { 00:16:59.861 "state": "completed", 00:16:59.861 "digest": "sha512", 00:16:59.861 "dhgroup": "ffdhe2048" 00:16:59.861 } 00:16:59.861 } 00:16:59.861 ]' 00:16:59.861 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.120 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.120 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.120 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.120 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.120 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.120 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.120 21:38:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.378 21:38:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.314 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.573 21:38:52 
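The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line that recurs in the trace relies on bash's `${var:+word}` expansion to make the controller-key flag optional: when the indexed `ckeys` entry is unset or empty (key3 carries no ckey3 in this run), the array expands to zero words and the flag drops out of the attach command entirely. A self-contained demonstration of the idiom, with hypothetical placeholder values standing in for the real keys:

```shell
# Demonstration of the ${var:+word} idiom behind the optional
# --dhchap-ctrlr-key argument seen in the trace. The ckeys values are
# hypothetical placeholders; keyid plays the role of $3 in the script.
ckeys=("ckey0val" "ckey1val" "ckey2val" "")   # key3 has no controller key

build_ckey_args() {
    local keyid=$1
    # Expands to two words when ckeys[keyid] is non-empty, to nothing otherwise.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"
}

build_ckey_args 0   # two words: --dhchap-ctrlr-key ckey0
build_ckey_args 3   # zero words: the flag is omitted
```

This is why the key2 and key0 attaches above carry `--dhchap-ctrlr-key` while the key3 attaches do not.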
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.573 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.140 00:17:02.141 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.141 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.141 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.400 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.400 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.400 21:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.400 21:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.400 21:38:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.400 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.400 { 00:17:02.400 "cntlid": 113, 00:17:02.400 "qid": 0, 00:17:02.400 "state": "enabled", 00:17:02.400 "thread": 
"nvmf_tgt_poll_group_000", 00:17:02.400 "listen_address": { 00:17:02.400 "trtype": "TCP", 00:17:02.400 "adrfam": "IPv4", 00:17:02.400 "traddr": "10.0.0.2", 00:17:02.400 "trsvcid": "4420" 00:17:02.400 }, 00:17:02.400 "peer_address": { 00:17:02.400 "trtype": "TCP", 00:17:02.400 "adrfam": "IPv4", 00:17:02.400 "traddr": "10.0.0.1", 00:17:02.400 "trsvcid": "47764" 00:17:02.400 }, 00:17:02.400 "auth": { 00:17:02.400 "state": "completed", 00:17:02.400 "digest": "sha512", 00:17:02.400 "dhgroup": "ffdhe3072" 00:17:02.400 } 00:17:02.400 } 00:17:02.400 ]' 00:17:02.400 21:38:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.400 21:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.400 21:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.400 21:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.400 21:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.400 21:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.400 21:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.400 21:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.676 21:38:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:17:04.057 21:38:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.057 21:38:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.314 00:17:04.315 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.315 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.315 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:04.880 { 00:17:04.880 "cntlid": 115, 00:17:04.880 "qid": 0, 00:17:04.880 "state": "enabled", 00:17:04.880 "thread": "nvmf_tgt_poll_group_000", 00:17:04.880 "listen_address": { 00:17:04.880 "trtype": "TCP", 00:17:04.880 "adrfam": "IPv4", 00:17:04.880 "traddr": "10.0.0.2", 00:17:04.880 "trsvcid": "4420" 00:17:04.880 }, 00:17:04.880 "peer_address": { 00:17:04.880 "trtype": "TCP", 00:17:04.880 "adrfam": "IPv4", 00:17:04.880 "traddr": "10.0.0.1", 00:17:04.880 "trsvcid": "47782" 00:17:04.880 }, 00:17:04.880 "auth": { 00:17:04.880 "state": "completed", 00:17:04.880 "digest": "sha512", 00:17:04.880 "dhgroup": "ffdhe3072" 00:17:04.880 } 00:17:04.880 } 00:17:04.880 ]' 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.880 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.139 21:38:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret 
DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:17:06.074 21:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.075 21:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:06.075 21:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.075 21:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.075 21:38:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.075 21:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.075 21:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:06.075 21:38:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.644 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.903 00:17:06.903 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.903 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.903 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.161 { 00:17:07.161 "cntlid": 117, 00:17:07.161 "qid": 0, 00:17:07.161 "state": "enabled", 00:17:07.161 "thread": "nvmf_tgt_poll_group_000", 00:17:07.161 "listen_address": { 00:17:07.161 "trtype": "TCP", 00:17:07.161 "adrfam": "IPv4", 00:17:07.161 "traddr": "10.0.0.2", 00:17:07.161 "trsvcid": "4420" 00:17:07.161 }, 00:17:07.161 "peer_address": { 00:17:07.161 "trtype": "TCP", 00:17:07.161 "adrfam": "IPv4", 00:17:07.161 "traddr": "10.0.0.1", 00:17:07.161 "trsvcid": "51198" 00:17:07.161 }, 00:17:07.161 "auth": { 00:17:07.161 "state": "completed", 00:17:07.161 "digest": "sha512", 00:17:07.161 "dhgroup": "ffdhe3072" 00:17:07.161 } 00:17:07.161 } 00:17:07.161 ]' 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.161 21:38:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.419 21:38:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc 
--dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:17:08.353 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.353 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:08.353 21:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.353 21:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.353 21:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.353 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.353 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:08.353 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.611 21:38:59 
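After each attach, the script dumps the qpair list and asserts the negotiated parameters with jq (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state` at auth.sh lines 46-48 in the trace). A dependency-free sketch of those checks, run against a trimmed copy of the JSON shape printed above (grep/cut stands in for jq here, purely for illustration):

```shell
# Trimmed qpair record in the shape printed by nvmf_subsystem_get_qpairs.
# target/auth.sh uses jq -r '.[0].auth.digest' etc.; grep/cut is a
# dependency-free stand-in for this sketch.
qpairs='[ { "cntlid": 117, "qid": 0, "state": "enabled",
  "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe3072" } } ]'

get_auth_field() {
    # Last match wins, so the nested auth.state shadows the top-level state.
    grep -o "\"$1\": \"[^\"]*\"" <<<"$qpairs" | tail -n1 | cut -d'"' -f4
}

# The same three assertions the trace makes after every attach.
[[ $(get_auth_field digest)  == sha512    ]]
[[ $(get_auth_field dhgroup) == ffdhe3072 ]]
[[ $(get_auth_field state)   == completed ]]
```

An `auth.state` of `completed` is the signal that the DH-HMAC-CHAP exchange succeeded for that digest/dhgroup pair; a mismatch on any of the three fields would fail the `[[ … ]]` comparison and abort the test.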
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.611 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.178 00:17:09.178 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.178 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.178 21:38:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.436 { 00:17:09.436 "cntlid": 119, 00:17:09.436 "qid": 0, 00:17:09.436 "state": "enabled", 00:17:09.436 "thread": "nvmf_tgt_poll_group_000", 00:17:09.436 "listen_address": { 00:17:09.436 "trtype": "TCP", 00:17:09.436 "adrfam": "IPv4", 00:17:09.436 "traddr": "10.0.0.2", 00:17:09.436 "trsvcid": "4420" 00:17:09.436 }, 00:17:09.436 "peer_address": { 00:17:09.436 "trtype": "TCP", 00:17:09.436 "adrfam": "IPv4", 00:17:09.436 "traddr": "10.0.0.1", 00:17:09.436 "trsvcid": "51224" 00:17:09.436 }, 00:17:09.436 "auth": { 00:17:09.436 "state": "completed", 00:17:09.436 "digest": "sha512", 00:17:09.436 "dhgroup": "ffdhe3072" 00:17:09.436 } 00:17:09.436 } 00:17:09.436 ]' 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.436 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.437 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.437 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.694 21:39:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc 
--dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.701 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.999 21:39:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.569 00:17:11.569 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.569 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.569 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.826 { 00:17:11.826 "cntlid": 121, 00:17:11.826 "qid": 0, 00:17:11.826 "state": "enabled", 00:17:11.826 "thread": "nvmf_tgt_poll_group_000", 00:17:11.826 "listen_address": { 00:17:11.826 "trtype": "TCP", 00:17:11.826 "adrfam": "IPv4", 00:17:11.826 "traddr": "10.0.0.2", 00:17:11.826 "trsvcid": "4420" 00:17:11.826 }, 00:17:11.826 "peer_address": { 00:17:11.826 "trtype": "TCP", 00:17:11.826 "adrfam": "IPv4", 00:17:11.826 "traddr": "10.0.0.1", 00:17:11.826 "trsvcid": "51268" 00:17:11.826 }, 00:17:11.826 "auth": { 00:17:11.826 "state": "completed", 00:17:11.826 "digest": "sha512", 00:17:11.826 "dhgroup": "ffdhe4096" 00:17:11.826 } 00:17:11.826 } 00:17:11.826 ]' 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.826 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.394 21:39:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:17:13.328 21:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.328 21:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:13.328 21:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.328 21:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.328 21:39:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.328 21:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.328 21:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:13.328 21:39:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:13.586 21:39:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.586 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.844 00:17:13.844 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.844 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.844 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.411 { 00:17:14.411 "cntlid": 123, 00:17:14.411 "qid": 0, 00:17:14.411 "state": "enabled", 00:17:14.411 "thread": "nvmf_tgt_poll_group_000", 00:17:14.411 "listen_address": { 00:17:14.411 "trtype": "TCP", 00:17:14.411 "adrfam": "IPv4", 00:17:14.411 "traddr": "10.0.0.2", 00:17:14.411 "trsvcid": "4420" 00:17:14.411 }, 00:17:14.411 "peer_address": { 00:17:14.411 "trtype": "TCP", 00:17:14.411 "adrfam": "IPv4", 00:17:14.411 "traddr": "10.0.0.1", 00:17:14.411 "trsvcid": "51274" 00:17:14.411 }, 00:17:14.411 "auth": { 00:17:14.411 "state": "completed", 00:17:14.411 "digest": "sha512", 00:17:14.411 "dhgroup": "ffdhe4096" 00:17:14.411 } 00:17:14.411 } 00:17:14.411 ]' 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.411 21:39:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.411 21:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.411 21:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.411 21:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.669 21:39:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:17:15.602 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.602 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:15.602 21:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.602 21:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.602 21:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.602 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.602 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:15.602 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:16.167 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:16.167 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.167 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:17:16.167 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:16.167 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.168 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.168 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.168 21:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.168 21:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.168 21:39:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.168 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.168 21:39:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.425 00:17:16.425 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.425 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.425 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.684 { 00:17:16.684 "cntlid": 125, 00:17:16.684 "qid": 0, 00:17:16.684 "state": "enabled", 00:17:16.684 "thread": "nvmf_tgt_poll_group_000", 00:17:16.684 "listen_address": { 00:17:16.684 "trtype": "TCP", 00:17:16.684 "adrfam": "IPv4", 00:17:16.684 "traddr": "10.0.0.2", 00:17:16.684 "trsvcid": "4420" 00:17:16.684 }, 00:17:16.684 "peer_address": { 00:17:16.684 "trtype": "TCP", 00:17:16.684 "adrfam": "IPv4", 00:17:16.684 "traddr": "10.0.0.1", 00:17:16.684 "trsvcid": "34114" 00:17:16.684 }, 00:17:16.684 "auth": { 00:17:16.684 "state": "completed", 00:17:16.684 "digest": "sha512", 00:17:16.684 "dhgroup": "ffdhe4096" 00:17:16.684 } 00:17:16.684 } 00:17:16.684 ]' 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.684 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.943 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:16.943 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.943 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.943 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.943 21:39:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.201 21:39:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:17:18.134 21:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.134 21:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:18.134 21:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.134 21:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.134 21:39:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.134 21:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.134 21:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.134 21:39:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.699 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.958 00:17:18.958 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.958 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.958 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.216 { 00:17:19.216 "cntlid": 127, 00:17:19.216 "qid": 0, 00:17:19.216 "state": "enabled", 00:17:19.216 "thread": "nvmf_tgt_poll_group_000", 00:17:19.216 "listen_address": { 00:17:19.216 "trtype": "TCP", 00:17:19.216 "adrfam": "IPv4", 00:17:19.216 "traddr": "10.0.0.2", 00:17:19.216 "trsvcid": "4420" 00:17:19.216 }, 00:17:19.216 "peer_address": { 00:17:19.216 "trtype": "TCP", 00:17:19.216 "adrfam": "IPv4", 00:17:19.216 "traddr": "10.0.0.1", 00:17:19.216 "trsvcid": "34128" 00:17:19.216 }, 00:17:19.216 "auth": { 00:17:19.216 "state": "completed", 00:17:19.216 "digest": "sha512", 00:17:19.216 "dhgroup": "ffdhe4096" 00:17:19.216 } 00:17:19.216 } 00:17:19.216 ]' 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.216 21:39:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.474 21:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.474 21:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.474 21:39:10 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.731 21:39:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.664 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.922 21:39:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.487 00:17:21.487 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.487 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.487 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.051 { 00:17:22.051 "cntlid": 129, 00:17:22.051 "qid": 0, 00:17:22.051 "state": "enabled", 00:17:22.051 "thread": "nvmf_tgt_poll_group_000", 00:17:22.051 "listen_address": { 00:17:22.051 "trtype": "TCP", 00:17:22.051 "adrfam": "IPv4", 00:17:22.051 "traddr": "10.0.0.2", 00:17:22.051 "trsvcid": "4420" 00:17:22.051 }, 00:17:22.051 "peer_address": { 00:17:22.051 "trtype": "TCP", 00:17:22.051 "adrfam": "IPv4", 00:17:22.051 "traddr": "10.0.0.1", 00:17:22.051 "trsvcid": "34154" 00:17:22.051 }, 00:17:22.051 "auth": { 00:17:22.051 "state": "completed", 00:17:22.051 "digest": "sha512", 00:17:22.051 "dhgroup": "ffdhe6144" 00:17:22.051 } 00:17:22.051 } 00:17:22.051 ]' 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.051 21:39:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.051 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.309 21:39:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:17:23.246 21:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.246 21:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:23.246 21:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.246 21:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.246 21:39:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.246 21:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.246 21:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:23.246 21:39:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:23.503 21:39:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.503 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.067 00:17:24.067 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.067 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:17:24.067 21:39:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.325 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.325 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.325 21:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.325 21:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.325 21:39:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.325 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.325 { 00:17:24.325 "cntlid": 131, 00:17:24.325 "qid": 0, 00:17:24.325 "state": "enabled", 00:17:24.325 "thread": "nvmf_tgt_poll_group_000", 00:17:24.325 "listen_address": { 00:17:24.325 "trtype": "TCP", 00:17:24.325 "adrfam": "IPv4", 00:17:24.325 "traddr": "10.0.0.2", 00:17:24.325 "trsvcid": "4420" 00:17:24.325 }, 00:17:24.325 "peer_address": { 00:17:24.325 "trtype": "TCP", 00:17:24.325 "adrfam": "IPv4", 00:17:24.325 "traddr": "10.0.0.1", 00:17:24.325 "trsvcid": "34168" 00:17:24.325 }, 00:17:24.325 "auth": { 00:17:24.325 "state": "completed", 00:17:24.325 "digest": "sha512", 00:17:24.325 "dhgroup": "ffdhe6144" 00:17:24.325 } 00:17:24.325 } 00:17:24.325 ]' 00:17:24.325 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.583 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.583 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.583 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:24.583 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:17:24.583 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.583 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.583 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.840 21:39:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:17:25.773 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.031 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:26.031 21:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.031 21:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.031 21:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.031 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.031 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.031 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:26.289 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:26.289 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.289 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.289 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:26.289 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.289 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.290 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.290 21:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.290 21:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.290 21:39:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.290 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.290 21:39:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.856 00:17:26.856 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:17:26.856 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.856 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.113 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.113 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.113 21:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.114 21:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.114 21:39:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.114 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.114 { 00:17:27.114 "cntlid": 133, 00:17:27.114 "qid": 0, 00:17:27.114 "state": "enabled", 00:17:27.114 "thread": "nvmf_tgt_poll_group_000", 00:17:27.114 "listen_address": { 00:17:27.114 "trtype": "TCP", 00:17:27.114 "adrfam": "IPv4", 00:17:27.114 "traddr": "10.0.0.2", 00:17:27.114 "trsvcid": "4420" 00:17:27.114 }, 00:17:27.114 "peer_address": { 00:17:27.114 "trtype": "TCP", 00:17:27.114 "adrfam": "IPv4", 00:17:27.114 "traddr": "10.0.0.1", 00:17:27.114 "trsvcid": "49576" 00:17:27.114 }, 00:17:27.114 "auth": { 00:17:27.114 "state": "completed", 00:17:27.114 "digest": "sha512", 00:17:27.114 "dhgroup": "ffdhe6144" 00:17:27.114 } 00:17:27.114 } 00:17:27.114 ]' 00:17:27.114 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.114 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.114 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.114 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.114 21:39:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.372 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.372 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.372 21:39:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.631 21:39:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:17:28.565 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.565 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:28.565 21:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.565 21:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.565 21:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.565 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.565 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.565 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.824 21:39:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.451 00:17:29.451 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:17:29.451 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.451 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.709 { 00:17:29.709 "cntlid": 135, 00:17:29.709 "qid": 0, 00:17:29.709 "state": "enabled", 00:17:29.709 "thread": "nvmf_tgt_poll_group_000", 00:17:29.709 "listen_address": { 00:17:29.709 "trtype": "TCP", 00:17:29.709 "adrfam": "IPv4", 00:17:29.709 "traddr": "10.0.0.2", 00:17:29.709 "trsvcid": "4420" 00:17:29.709 }, 00:17:29.709 "peer_address": { 00:17:29.709 "trtype": "TCP", 00:17:29.709 "adrfam": "IPv4", 00:17:29.709 "traddr": "10.0.0.1", 00:17:29.709 "trsvcid": "49602" 00:17:29.709 }, 00:17:29.709 "auth": { 00:17:29.709 "state": "completed", 00:17:29.709 "digest": "sha512", 00:17:29.709 "dhgroup": "ffdhe6144" 00:17:29.709 } 00:17:29.709 } 00:17:29.709 ]' 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.709 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.966 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:17:29.966 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.966 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.966 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.966 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.223 21:39:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:17:31.153 21:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.153 21:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:31.153 21:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.153 21:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.153 21:39:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.153 21:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.153 21:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.153 21:39:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.153 21:39:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.716 21:39:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.647 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.647 { 00:17:32.647 "cntlid": 137, 00:17:32.647 "qid": 0, 00:17:32.647 "state": "enabled", 00:17:32.647 "thread": "nvmf_tgt_poll_group_000", 00:17:32.647 "listen_address": { 00:17:32.647 "trtype": "TCP", 00:17:32.647 "adrfam": "IPv4", 00:17:32.647 "traddr": "10.0.0.2", 00:17:32.647 "trsvcid": "4420" 00:17:32.647 }, 00:17:32.647 "peer_address": { 00:17:32.647 "trtype": "TCP", 00:17:32.647 "adrfam": "IPv4", 00:17:32.647 "traddr": "10.0.0.1", 00:17:32.647 "trsvcid": "49630" 00:17:32.647 }, 00:17:32.647 "auth": { 00:17:32.647 "state": "completed", 00:17:32.647 "digest": "sha512", 00:17:32.647 "dhgroup": "ffdhe8192" 00:17:32.647 } 00:17:32.647 } 00:17:32.647 ]' 00:17:32.647 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.903 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.903 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:17:32.903 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:32.903 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.903 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.903 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.903 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.159 21:39:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:17:34.532 21:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.532 21:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:34.532 21:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.532 21:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.532 21:39:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.532 21:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.532 21:39:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.532 21:39:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.532 21:39:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.464 00:17:35.464 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.464 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.464 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.723 { 00:17:35.723 "cntlid": 139, 00:17:35.723 "qid": 0, 00:17:35.723 "state": "enabled", 00:17:35.723 "thread": "nvmf_tgt_poll_group_000", 00:17:35.723 "listen_address": { 00:17:35.723 "trtype": "TCP", 00:17:35.723 "adrfam": "IPv4", 00:17:35.723 "traddr": "10.0.0.2", 00:17:35.723 "trsvcid": "4420" 00:17:35.723 }, 00:17:35.723 "peer_address": { 00:17:35.723 "trtype": "TCP", 00:17:35.723 "adrfam": "IPv4", 00:17:35.723 "traddr": "10.0.0.1", 00:17:35.723 "trsvcid": "49666" 00:17:35.723 }, 00:17:35.723 "auth": { 00:17:35.723 "state": "completed", 00:17:35.723 "digest": "sha512", 00:17:35.723 "dhgroup": "ffdhe8192" 00:17:35.723 } 00:17:35.723 } 00:17:35.723 ]' 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.723 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.980 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.980 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.980 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.980 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.981 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.239 21:39:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZjRmYjU3OGNkYTlhZDJlOGRiOGJlMjZkZTQ5NzhhMmbYGZ25: --dhchap-ctrl-secret DHHC-1:02:Nzc2NzdkMmY2ZDQ4ODBjMWE1OWIyNmJjMWIxYjRkZWIyMTgyMzY2YTcxZjc1MjYx6f8kzw==: 00:17:37.172 21:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.172 21:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:37.172 21:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.172 21:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.172 21:39:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.172 21:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:17:37.172 21:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.172 21:39:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.430 21:39:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.363 00:17:38.363 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.363 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.363 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.621 { 00:17:38.621 "cntlid": 141, 00:17:38.621 "qid": 0, 00:17:38.621 "state": "enabled", 00:17:38.621 "thread": "nvmf_tgt_poll_group_000", 00:17:38.621 "listen_address": { 00:17:38.621 "trtype": "TCP", 00:17:38.621 "adrfam": "IPv4", 00:17:38.621 "traddr": "10.0.0.2", 00:17:38.621 "trsvcid": "4420" 00:17:38.621 }, 00:17:38.621 "peer_address": { 00:17:38.621 "trtype": "TCP", 00:17:38.621 "adrfam": "IPv4", 00:17:38.621 "traddr": "10.0.0.1", 00:17:38.621 "trsvcid": "59042" 00:17:38.621 }, 00:17:38.621 "auth": { 00:17:38.621 "state": "completed", 00:17:38.621 "digest": "sha512", 00:17:38.621 "dhgroup": "ffdhe8192" 00:17:38.621 } 00:17:38.621 } 00:17:38.621 ]' 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.621 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.878 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.878 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.878 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.878 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.878 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.136 21:39:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZGNmYWIwNzczNmM5NGIyN2YwN2FmZjFlZmUwMDgzZjE5NGEwMDg3YWUwMWExOTNkGB+0Uw==: --dhchap-ctrl-secret DHHC-1:01:YTI5ZGYwYmM1M2QxODI2NzYyOWNmNjU0Yjk1OWIzYjWP0gKF: 00:17:40.069 21:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.069 21:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:40.069 21:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.069 21:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.069 21:39:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.069 
21:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.069 21:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.069 21:39:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.633 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.634 21:39:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.567 00:17:41.567 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.568 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.568 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.568 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.568 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.568 21:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.568 21:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.825 { 00:17:41.825 "cntlid": 143, 00:17:41.825 "qid": 0, 00:17:41.825 "state": "enabled", 00:17:41.825 "thread": "nvmf_tgt_poll_group_000", 00:17:41.825 "listen_address": { 00:17:41.825 "trtype": "TCP", 00:17:41.825 "adrfam": "IPv4", 00:17:41.825 "traddr": "10.0.0.2", 00:17:41.825 "trsvcid": "4420" 00:17:41.825 }, 00:17:41.825 "peer_address": { 00:17:41.825 "trtype": "TCP", 00:17:41.825 "adrfam": "IPv4", 00:17:41.825 "traddr": "10.0.0.1", 00:17:41.825 "trsvcid": "59070" 00:17:41.825 }, 00:17:41.825 "auth": { 00:17:41.825 "state": "completed", 00:17:41.825 "digest": "sha512", 00:17:41.825 "dhgroup": "ffdhe8192" 00:17:41.825 } 00:17:41.825 } 00:17:41.825 ]' 00:17:41.825 21:39:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.825 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.082 21:39:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.013 
21:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.013 21:39:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.578 21:39:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.578 21:39:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.516 00:17:44.516 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.516 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.516 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.516 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.516 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.516 21:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.516 21:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.772 { 00:17:44.772 "cntlid": 145, 00:17:44.772 "qid": 0, 00:17:44.772 "state": "enabled", 00:17:44.772 "thread": "nvmf_tgt_poll_group_000", 00:17:44.772 "listen_address": { 00:17:44.772 "trtype": "TCP", 00:17:44.772 "adrfam": 
"IPv4", 00:17:44.772 "traddr": "10.0.0.2", 00:17:44.772 "trsvcid": "4420" 00:17:44.772 }, 00:17:44.772 "peer_address": { 00:17:44.772 "trtype": "TCP", 00:17:44.772 "adrfam": "IPv4", 00:17:44.772 "traddr": "10.0.0.1", 00:17:44.772 "trsvcid": "59102" 00:17:44.772 }, 00:17:44.772 "auth": { 00:17:44.772 "state": "completed", 00:17:44.772 "digest": "sha512", 00:17:44.772 "dhgroup": "ffdhe8192" 00:17:44.772 } 00:17:44.772 } 00:17:44.772 ]' 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.772 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.029 21:39:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MWRlYjQwYzQ5NGFlZDdiMGY0NDNiYzcyMmFlYTYxZmZiYWQ4NDIwNjVmNDQxYmI2HlkFWQ==: --dhchap-ctrl-secret DHHC-1:03:NTEzZDQ3YzNkMzM3M2RlY2VlMDllNTQ3YmFjODliN2NkMWQ1OTE4NDY1NjM4MGM0ZjJjZGVmOTA4ODQyNTg4YuIE/eU=: 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.401 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:46.401 21:39:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:46.401 21:39:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:46.966 request: 00:17:46.966 { 00:17:46.966 "name": "nvme0", 00:17:46.966 "trtype": "tcp", 00:17:46.966 "traddr": "10.0.0.2", 00:17:46.966 "adrfam": "ipv4", 00:17:46.966 "trsvcid": "4420", 00:17:46.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:46.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:46.966 "prchk_reftag": false, 00:17:46.966 "prchk_guard": false, 00:17:46.966 "hdgst": false, 00:17:46.966 "ddgst": false, 00:17:46.966 "dhchap_key": "key2", 00:17:46.966 "method": "bdev_nvme_attach_controller", 00:17:46.966 "req_id": 1 00:17:46.966 } 00:17:46.966 Got JSON-RPC error response 00:17:46.966 response: 00:17:46.966 { 00:17:46.966 "code": -5, 00:17:46.966 "message": "Input/output error" 00:17:46.966 } 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:46.966 
21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:46.966 21:39:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.894 request: 00:17:47.894 { 00:17:47.894 "name": "nvme0", 00:17:47.894 "trtype": "tcp", 00:17:47.894 "traddr": "10.0.0.2", 00:17:47.894 "adrfam": "ipv4", 00:17:47.894 "trsvcid": "4420", 00:17:47.894 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:47.894 "prchk_reftag": false, 00:17:47.894 "prchk_guard": false, 00:17:47.894 "hdgst": false, 00:17:47.894 "ddgst": false, 00:17:47.894 "dhchap_key": "key1", 00:17:47.894 "dhchap_ctrlr_key": "ckey2", 00:17:47.894 "method": "bdev_nvme_attach_controller", 00:17:47.894 "req_id": 1 00:17:47.894 } 00:17:47.894 Got JSON-RPC error response 00:17:47.894 response: 00:17:47.894 { 00:17:47.894 "code": -5, 00:17:47.894 "message": "Input/output error" 00:17:47.894 } 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 
00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.894 21:39:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.826 request: 00:17:48.826 { 00:17:48.826 "name": "nvme0", 00:17:48.826 "trtype": "tcp", 00:17:48.826 "traddr": "10.0.0.2", 00:17:48.826 "adrfam": "ipv4", 00:17:48.826 "trsvcid": "4420", 00:17:48.826 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:48.826 "prchk_reftag": false, 00:17:48.826 "prchk_guard": false, 00:17:48.826 "hdgst": false, 00:17:48.826 "ddgst": false, 00:17:48.826 "dhchap_key": "key1", 00:17:48.826 "dhchap_ctrlr_key": "ckey1", 00:17:48.826 "method": "bdev_nvme_attach_controller", 00:17:48.826 "req_id": 1 00:17:48.826 } 00:17:48.826 Got JSON-RPC error response 00:17:48.826 response: 00:17:48.826 { 00:17:48.826 "code": -5, 00:17:48.826 "message": "Input/output error" 00:17:48.826 } 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.826 21:39:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 347010 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 347010 ']' 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 347010 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 347010 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 347010' 00:17:48.826 killing process with pid 347010 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 347010 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 347010 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 
00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=366571 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 366571 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 366571 ']' 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.826 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 366571 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 366571 ']' 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.392 21:39:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.649 21:39:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.579 00:17:50.579 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.579 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.579 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.836 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.836 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.836 21:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.836 21:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.836 21:39:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.836 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.836 { 00:17:50.836 "cntlid": 1, 00:17:50.836 "qid": 0, 00:17:50.836 "state": "enabled", 00:17:50.836 "thread": "nvmf_tgt_poll_group_000", 00:17:50.836 "listen_address": { 00:17:50.836 "trtype": "TCP", 00:17:50.836 "adrfam": "IPv4", 00:17:50.836 "traddr": "10.0.0.2", 00:17:50.836 "trsvcid": "4420" 00:17:50.836 }, 00:17:50.836 "peer_address": { 00:17:50.836 "trtype": "TCP", 00:17:50.836 "adrfam": "IPv4", 00:17:50.836 "traddr": "10.0.0.1", 00:17:50.836 "trsvcid": 
"46760" 00:17:50.836 }, 00:17:50.836 "auth": { 00:17:50.836 "state": "completed", 00:17:50.836 "digest": "sha512", 00:17:50.836 "dhgroup": "ffdhe8192" 00:17:50.836 } 00:17:50.836 } 00:17:50.837 ]' 00:17:50.837 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.837 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.837 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.124 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.124 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.124 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.124 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.124 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.381 21:39:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:MWRkOTdmZTM4OWJhZThlNzMyNWJjNGYwMTZkZjg4MDI4MDI4ODU1MzY2MTM4MGEyYjlmMTE0Y2M2N2MzMTYzOBm7Qq0=: 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:52.309 21:39:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.566 21:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.822 request: 00:17:52.822 { 00:17:52.822 "name": "nvme0", 00:17:52.822 "trtype": "tcp", 00:17:52.822 "traddr": "10.0.0.2", 00:17:52.822 "adrfam": "ipv4", 00:17:52.822 "trsvcid": "4420", 00:17:52.822 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:52.822 "prchk_reftag": false, 00:17:52.822 "prchk_guard": false, 00:17:52.822 "hdgst": false, 00:17:52.822 "ddgst": false, 00:17:52.822 "dhchap_key": "key3", 00:17:52.822 "method": "bdev_nvme_attach_controller", 00:17:52.822 "req_id": 1 00:17:52.822 } 00:17:52.822 Got JSON-RPC error response 00:17:52.822 response: 00:17:52.822 { 00:17:52.822 "code": -5, 00:17:52.822 "message": "Input/output error" 00:17:52.822 } 00:17:52.822 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:52.822 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.822 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.822 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.822 21:39:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:52.822 21:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:52.822 21:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:52.822 21:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:53.079 21:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.079 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:53.079 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.079 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:53.079 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.079 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:53.079 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.079 21:39:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.079 
21:39:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.335 request: 00:17:53.335 { 00:17:53.335 "name": "nvme0", 00:17:53.335 "trtype": "tcp", 00:17:53.335 "traddr": "10.0.0.2", 00:17:53.335 "adrfam": "ipv4", 00:17:53.335 "trsvcid": "4420", 00:17:53.335 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:53.335 "prchk_reftag": false, 00:17:53.335 "prchk_guard": false, 00:17:53.335 "hdgst": false, 00:17:53.335 "ddgst": false, 00:17:53.335 "dhchap_key": "key3", 00:17:53.335 "method": "bdev_nvme_attach_controller", 00:17:53.335 "req_id": 1 00:17:53.335 } 00:17:53.335 Got JSON-RPC error response 00:17:53.335 response: 00:17:53.335 { 00:17:53.335 "code": -5, 00:17:53.335 "message": "Input/output error" 00:17:53.335 } 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:53.335 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.592 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.593 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.850 request: 00:17:53.850 { 00:17:53.850 "name": "nvme0", 00:17:53.850 "trtype": "tcp", 00:17:53.850 "traddr": "10.0.0.2", 00:17:53.850 "adrfam": "ipv4", 00:17:53.850 "trsvcid": "4420", 00:17:53.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:53.850 "prchk_reftag": false, 00:17:53.850 "prchk_guard": false, 00:17:53.850 "hdgst": false, 00:17:53.850 "ddgst": false, 00:17:53.850 "dhchap_key": "key0", 00:17:53.850 "dhchap_ctrlr_key": "key1", 00:17:53.850 "method": "bdev_nvme_attach_controller", 00:17:53.850 "req_id": 1 00:17:53.850 } 00:17:53.850 Got JSON-RPC error response 00:17:53.850 response: 00:17:53.850 { 
00:17:53.850 "code": -5, 00:17:53.850 "message": "Input/output error" 00:17:53.850 } 00:17:53.850 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:53.850 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.850 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.850 21:39:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.850 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:53.850 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:54.415 00:17:54.415 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:54.415 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.415 21:39:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:54.415 21:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.415 21:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.415 21:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.672 21:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:17:54.672 21:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:54.672 21:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 347030 00:17:54.672 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 347030 ']' 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 347030 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 347030 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 347030' 00:17:54.673 killing process with pid 347030 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 347030 00:17:54.673 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 347030 00:17:54.931 21:39:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:54.931 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:54.931 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:54.931 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:54.931 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:54.931 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:54.931 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.189 rmmod nvme_tcp 00:17:55.189 rmmod nvme_fabrics 00:17:55.189 
rmmod nvme_keyring 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 366571 ']' 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 366571 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 366571 ']' 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 366571 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 366571 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 366571' 00:17:55.189 killing process with pid 366571 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 366571 00:17:55.189 21:39:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 366571 00:17:55.448 21:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.448 21:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.448 21:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.448 21:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.448 
21:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.448 21:39:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.448 21:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.448 21:39:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.357 21:39:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:57.358 21:39:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Iht /tmp/spdk.key-sha256.GTn /tmp/spdk.key-sha384.0w1 /tmp/spdk.key-sha512.vqJ /tmp/spdk.key-sha512.ffB /tmp/spdk.key-sha384.nEh /tmp/spdk.key-sha256.jjK '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:57.358 00:17:57.358 real 3m26.659s 00:17:57.358 user 8m4.370s 00:17:57.358 sys 0m25.597s 00:17:57.358 21:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.358 21:39:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.358 ************************************ 00:17:57.358 END TEST nvmf_auth_target 00:17:57.358 ************************************ 00:17:57.358 21:39:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:57.358 21:39:48 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:57.358 21:39:48 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:57.358 21:39:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:57.358 21:39:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.358 21:39:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.358 ************************************ 00:17:57.358 
START TEST nvmf_bdevio_no_huge 00:17:57.358 ************************************ 00:17:57.358 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:57.617 * Looking for test storage... 00:17:57.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.617 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:57.618 21:39:48 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:57.618 21:39:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:59.521 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:59.522 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:59.522 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:59.522 Found net devices under 0000:08:00.0: cvl_0_0 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:59.522 Found net devices under 0000:08:00.1: cvl_0_1 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:59.522 21:39:49 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.522 
21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:59.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:17:59.522 00:17:59.522 --- 10.0.0.2 ping statistics --- 00:17:59.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.522 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:17:59.522 00:17:59.522 --- 10.0.0.1 ping statistics --- 00:17:59.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.522 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.522 21:39:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.522 21:39:49 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=368636 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 368636 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 368636 ']' 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.522 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.522 [2024-07-15 21:39:50.068837] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:17:59.522 [2024-07-15 21:39:50.068945] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:59.522 [2024-07-15 21:39:50.141831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.522 [2024-07-15 21:39:50.263502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.522 [2024-07-15 21:39:50.263562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.523 [2024-07-15 21:39:50.263578] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.523 [2024-07-15 21:39:50.263591] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.523 [2024-07-15 21:39:50.263603] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
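The `nvmf_tcp_init` plumbing traced earlier in this log (nvmf/common.sh@229-268) reduces to a short sequence of `ip`/`iptables` commands that move one port of the NIC into a private namespace for the target and leave the other in the root namespace for the initiator. A minimal dry-run sketch, assuming the interface names and addresses shown in this log (`cvl_0_0`/`cvl_0_1`, 10.0.0.1/10.0.0.2); it only echoes the commands, so it is safe to run anywhere — to execute for real, change `run` to invoke `"$@"` and run as root on a host that actually has these devices:

```shell
#!/usr/bin/env bash
# Hedged sketch of the TCP test-network setup logged above.
# Interface names, namespace name, and IPs mirror this log; treat them
# as assumptions about this particular test rig.
NS=cvl_0_0_ns_spdk            # target network namespace
TARGET_IF=cvl_0_0             # NIC port moved into the namespace (10.0.0.2)
INITIATOR_IF=cvl_0_1          # NIC port left in the root namespace (10.0.0.1)

run() { echo "+ $*"; }        # dry run; replace body with "$@" to execute

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2        # initiator -> target reachability check
```

With this layout, the target app itself is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt …`, as the `NVMF_TARGET_NS_CMD` prefix in the log shows), so target and initiator traffic cross a real wire even on a single host.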
00:17:59.523 [2024-07-15 21:39:50.263692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:59.523 [2024-07-15 21:39:50.264009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:59.523 [2024-07-15 21:39:50.264059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:59.523 [2024-07-15 21:39:50.264063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 [2024-07-15 21:39:50.387058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 Malloc0 00:17:59.781 21:39:50 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 [2024-07-15 21:39:50.425722] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:59.781 21:39:50 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:59.781 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:59.781 { 00:17:59.782 "params": { 00:17:59.782 "name": "Nvme$subsystem", 00:17:59.782 "trtype": "$TEST_TRANSPORT", 00:17:59.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:59.782 "adrfam": "ipv4", 00:17:59.782 "trsvcid": "$NVMF_PORT", 00:17:59.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:59.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:59.782 "hdgst": ${hdgst:-false}, 00:17:59.782 "ddgst": ${ddgst:-false} 00:17:59.782 }, 00:17:59.782 "method": "bdev_nvme_attach_controller" 00:17:59.782 } 00:17:59.782 EOF 00:17:59.782 )") 00:17:59.782 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:59.782 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:59.782 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:59.782 21:39:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:59.782 "params": { 00:17:59.782 "name": "Nvme1", 00:17:59.782 "trtype": "tcp", 00:17:59.782 "traddr": "10.0.0.2", 00:17:59.782 "adrfam": "ipv4", 00:17:59.782 "trsvcid": "4420", 00:17:59.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.782 "hdgst": false, 00:17:59.782 "ddgst": false 00:17:59.782 }, 00:17:59.782 "method": "bdev_nvme_attach_controller" 00:17:59.782 }' 00:17:59.782 [2024-07-15 21:39:50.474767] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:17:59.782 [2024-07-15 21:39:50.474867] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid368754 ] 00:17:59.782 [2024-07-15 21:39:50.539693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:00.040 [2024-07-15 21:39:50.654122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.040 [2024-07-15 21:39:50.654216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.040 [2024-07-15 21:39:50.654175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.298 I/O targets: 00:18:00.298 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:00.298 00:18:00.298 00:18:00.298 CUnit - A unit testing framework for C - Version 2.1-3 00:18:00.298 http://cunit.sourceforge.net/ 00:18:00.298 00:18:00.298 00:18:00.298 Suite: bdevio tests on: Nvme1n1 00:18:00.298 Test: blockdev write read block ...passed 00:18:00.298 Test: blockdev write zeroes read block ...passed 00:18:00.298 Test: blockdev write zeroes read no split ...passed 00:18:00.298 Test: blockdev write zeroes read split ...passed 00:18:00.298 Test: blockdev write zeroes read split partial ...passed 00:18:00.298 Test: blockdev reset ...[2024-07-15 21:39:50.967065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:00.298 [2024-07-15 21:39:50.967195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57e570 (9): Bad file descriptor 00:18:00.556 [2024-07-15 21:39:51.109667] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:00.556 passed 00:18:00.556 Test: blockdev write read 8 blocks ...passed 00:18:00.556 Test: blockdev write read size > 128k ...passed 00:18:00.556 Test: blockdev write read invalid size ...passed 00:18:00.556 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:00.556 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:00.556 Test: blockdev write read max offset ...passed 00:18:00.556 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:00.556 Test: blockdev writev readv 8 blocks ...passed 00:18:00.556 Test: blockdev writev readv 30 x 1block ...passed 00:18:00.556 Test: blockdev writev readv block ...passed 00:18:00.814 Test: blockdev writev readv size > 128k ...passed 00:18:00.814 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:00.814 Test: blockdev comparev and writev ...[2024-07-15 21:39:51.364360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.814 [2024-07-15 21:39:51.364471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.364547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.814 [2024-07-15 21:39:51.364591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.365045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.814 [2024-07-15 21:39:51.365125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.365202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.814 [2024-07-15 21:39:51.365264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.365685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.814 [2024-07-15 21:39:51.365767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.365854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.814 [2024-07-15 21:39:51.365898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.366351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.814 [2024-07-15 21:39:51.366414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.366464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:00.814 [2024-07-15 21:39:51.366514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:00.814 passed 00:18:00.814 Test: blockdev nvme passthru rw ...passed 00:18:00.814 Test: blockdev nvme passthru vendor specific ...[2024-07-15 21:39:51.448364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:00.814 [2024-07-15 21:39:51.448391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.448538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:00.814 [2024-07-15 21:39:51.448559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.448695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:00.814 [2024-07-15 21:39:51.448716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:00.814 [2024-07-15 21:39:51.448854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:00.814 [2024-07-15 21:39:51.448875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:00.814 passed 00:18:00.814 Test: blockdev nvme admin passthru ...passed 00:18:00.814 Test: blockdev copy ...passed 00:18:00.814 00:18:00.814 Run Summary: Type Total Ran Passed Failed Inactive 00:18:00.814 suites 1 1 n/a 0 0 00:18:00.814 tests 23 23 23 0 0 00:18:00.814 asserts 152 152 152 0 n/a 00:18:00.814 00:18:00.814 Elapsed time = 1.313 seconds 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.072 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:01.072 rmmod nvme_tcp 00:18:01.330 rmmod nvme_fabrics 00:18:01.330 rmmod nvme_keyring 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 368636 ']' 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 368636 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 368636 ']' 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 368636 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 368636 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 368636' 00:18:01.330 killing process with pid 368636 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 368636 00:18:01.330 21:39:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 368636 00:18:01.590 21:39:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:01.590 21:39:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:01.590 21:39:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:01.590 21:39:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.590 21:39:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:01.590 21:39:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.590 21:39:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.590 21:39:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.123 21:39:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:04.123 00:18:04.123 real 0m6.225s 00:18:04.123 user 0m10.717s 00:18:04.123 sys 0m2.231s 00:18:04.123 21:39:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:04.123 21:39:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:04.123 ************************************ 00:18:04.123 END TEST nvmf_bdevio_no_huge 00:18:04.123 ************************************ 00:18:04.123 21:39:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:04.123 21:39:54 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:04.123 21:39:54 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:04.123 21:39:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:04.123 21:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:04.123 ************************************ 00:18:04.123 START TEST nvmf_tls 00:18:04.123 ************************************ 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:04.123 * Looking for test storage... 00:18:04.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.123 21:39:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:04.124 21:39:54 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:04.124 21:39:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:05.503 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:05.503 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:05.503 21:39:56 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:05.503 Found net devices under 0000:08:00.0: cvl_0_0 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:05.503 Found net devices under 0000:08:00.1: cvl_0_1 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:05.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:18:05.503 00:18:05.503 --- 10.0.0.2 ping statistics --- 00:18:05.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.503 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:18:05.503 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:05.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:18:05.503 00:18:05.503 --- 10.0.0.1 ping statistics --- 00:18:05.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.503 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=370347 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 370347 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 370347 ']' 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.504 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 [2024-07-15 21:39:56.341341] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:05.762 [2024-07-15 21:39:56.341422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.762 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.762 [2024-07-15 21:39:56.409220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.762 [2024-07-15 21:39:56.524786] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.762 [2024-07-15 21:39:56.524850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.762 [2024-07-15 21:39:56.524867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.762 [2024-07-15 21:39:56.524881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.762 [2024-07-15 21:39:56.524893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:05.762 [2024-07-15 21:39:56.524922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.021 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.021 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:06.021 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:06.021 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:06.021 21:39:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.021 21:39:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.021 21:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:06.021 21:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:06.279 true 00:18:06.279 21:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.279 21:39:56 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:06.537 21:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:06.537 21:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:06.537 21:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:06.794 21:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.794 21:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:07.052 21:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:07.052 21:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:07.052 21:39:57 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:07.310 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:07.310 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:07.568 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:07.568 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:07.568 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:07.568 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:07.827 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:07.827 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:07.827 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:08.085 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:08.085 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:08.343 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:08.343 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:08.343 21:39:58 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:08.601 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:08.601 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.LJPVOsPDMh 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:08.858 
21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.nEHqygOd2J 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.LJPVOsPDMh 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.nEHqygOd2J 00:18:08.858 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:09.116 21:39:59 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:09.373 21:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.LJPVOsPDMh 00:18:09.373 21:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LJPVOsPDMh 00:18:09.373 21:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:09.629 [2024-07-15 21:40:00.355323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.629 21:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:09.886 21:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:10.143 [2024-07-15 21:40:00.892755] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.143 [2024-07-15 21:40:00.892951] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:18:10.143 21:40:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:10.401 malloc0 00:18:10.401 21:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:10.659 21:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LJPVOsPDMh 00:18:10.917 [2024-07-15 21:40:01.627844] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:10.917 21:40:01 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.LJPVOsPDMh 00:18:10.917 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.142 Initializing NVMe Controllers 00:18:23.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:23.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:23.142 Initialization complete. Launching workers. 
00:18:23.142 ======================================================== 00:18:23.142 Latency(us) 00:18:23.142 Device Information : IOPS MiB/s Average min max 00:18:23.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9367.26 36.59 6833.90 1411.56 8373.78 00:18:23.142 ======================================================== 00:18:23.142 Total : 9367.26 36.59 6833.90 1411.56 8373.78 00:18:23.142 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LJPVOsPDMh 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LJPVOsPDMh' 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=371803 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 371803 /var/tmp/bdevperf.sock 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 371803 ']' 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.142 21:40:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.142 [2024-07-15 21:40:11.795687] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:23.142 [2024-07-15 21:40:11.795787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371803 ] 00:18:23.142 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.142 [2024-07-15 21:40:11.850064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.142 [2024-07-15 21:40:11.949250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.142 21:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.142 21:40:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:23.142 21:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LJPVOsPDMh 00:18:23.142 [2024-07-15 21:40:12.315240] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.142 [2024-07-15 21:40:12.315357] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:23.142 TLSTESTn1 00:18:23.142 21:40:12 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:23.142 Running I/O for 10 seconds... 00:18:33.101 00:18:33.101 Latency(us) 00:18:33.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.101 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:33.101 Verification LBA range: start 0x0 length 0x2000 00:18:33.101 TLSTESTn1 : 10.02 3600.03 14.06 0.00 0.00 35490.52 8543.95 57865.86 00:18:33.101 =================================================================================================================== 00:18:33.101 Total : 3600.03 14.06 0.00 0.00 35490.52 8543.95 57865.86 00:18:33.101 0 00:18:33.101 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:33.101 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 371803 00:18:33.101 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 371803 ']' 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 371803 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 371803 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 371803' 00:18:33.102 killing process with pid 371803 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 371803 00:18:33.102 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.102 00:18:33.102 Latency(us) 
00:18:33.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.102 =================================================================================================================== 00:18:33.102 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.102 [2024-07-15 21:40:22.602979] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 371803 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nEHqygOd2J 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nEHqygOd2J 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nEHqygOd2J 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nEHqygOd2J' 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=372814 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 372814 /var/tmp/bdevperf.sock 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 372814 ']' 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.102 21:40:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.102 [2024-07-15 21:40:22.845319] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:18:33.102 [2024-07-15 21:40:22.845415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372814 ] 00:18:33.102 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.102 [2024-07-15 21:40:22.900511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.102 [2024-07-15 21:40:23.003050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nEHqygOd2J 00:18:33.102 [2024-07-15 21:40:23.341379] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.102 [2024-07-15 21:40:23.341493] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:33.102 [2024-07-15 21:40:23.347976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:33.102 [2024-07-15 21:40:23.348191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4e470 (107): Transport endpoint is not connected 00:18:33.102 [2024-07-15 21:40:23.349175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4e470 (9): Bad file descriptor 00:18:33.102 [2024-07-15 
21:40:23.350186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:33.102 [2024-07-15 21:40:23.350205] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:33.102 [2024-07-15 21:40:23.350221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:33.102 request: 00:18:33.102 { 00:18:33.102 "name": "TLSTEST", 00:18:33.102 "trtype": "tcp", 00:18:33.102 "traddr": "10.0.0.2", 00:18:33.102 "adrfam": "ipv4", 00:18:33.102 "trsvcid": "4420", 00:18:33.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.102 "prchk_reftag": false, 00:18:33.102 "prchk_guard": false, 00:18:33.102 "hdgst": false, 00:18:33.102 "ddgst": false, 00:18:33.102 "psk": "/tmp/tmp.nEHqygOd2J", 00:18:33.102 "method": "bdev_nvme_attach_controller", 00:18:33.102 "req_id": 1 00:18:33.102 } 00:18:33.102 Got JSON-RPC error response 00:18:33.102 response: 00:18:33.102 { 00:18:33.102 "code": -5, 00:18:33.102 "message": "Input/output error" 00:18:33.102 } 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 372814 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 372814 ']' 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 372814 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 372814 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 372814' 
00:18:33.102 killing process with pid 372814 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 372814 00:18:33.102 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.102 00:18:33.102 Latency(us) 00:18:33.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.102 =================================================================================================================== 00:18:33.102 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.102 [2024-07-15 21:40:23.386970] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 372814 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LJPVOsPDMh 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LJPVOsPDMh 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:33.102 21:40:23 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LJPVOsPDMh 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LJPVOsPDMh' 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=372913 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 372913 /var/tmp/bdevperf.sock 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 372913 ']' 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.102 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.102 [2024-07-15 21:40:23.614155] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:33.102 [2024-07-15 21:40:23.614236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372913 ] 00:18:33.102 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.102 [2024-07-15 21:40:23.663471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.103 [2024-07-15 21:40:23.767499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.103 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.103 21:40:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.103 21:40:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.LJPVOsPDMh 00:18:33.359 [2024-07-15 21:40:24.132965] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.359 [2024-07-15 21:40:24.133081] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:33.359 [2024-07-15 21:40:24.139893] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:33.359 [2024-07-15 21:40:24.139921] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for 
identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:33.359 [2024-07-15 21:40:24.139970] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:33.359 [2024-07-15 21:40:24.140762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1176470 (107): Transport endpoint is not connected 00:18:33.359 [2024-07-15 21:40:24.141760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1176470 (9): Bad file descriptor 00:18:33.359 [2024-07-15 21:40:24.142763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:33.359 [2024-07-15 21:40:24.142786] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:33.359 [2024-07-15 21:40:24.142816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:33.359 request: 00:18:33.359 { 00:18:33.359 "name": "TLSTEST", 00:18:33.359 "trtype": "tcp", 00:18:33.359 "traddr": "10.0.0.2", 00:18:33.359 "adrfam": "ipv4", 00:18:33.359 "trsvcid": "4420", 00:18:33.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.359 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:33.359 "prchk_reftag": false, 00:18:33.359 "prchk_guard": false, 00:18:33.359 "hdgst": false, 00:18:33.359 "ddgst": false, 00:18:33.359 "psk": "/tmp/tmp.LJPVOsPDMh", 00:18:33.359 "method": "bdev_nvme_attach_controller", 00:18:33.359 "req_id": 1 00:18:33.359 } 00:18:33.359 Got JSON-RPC error response 00:18:33.359 response: 00:18:33.359 { 00:18:33.359 "code": -5, 00:18:33.359 "message": "Input/output error" 00:18:33.359 } 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 372913 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 372913 ']' 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 372913 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 372913 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 372913' 00:18:33.617 killing process with pid 372913 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 372913 00:18:33.617 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.617 00:18:33.617 Latency(us) 00:18:33.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.617 
=================================================================================================================== 00:18:33.617 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.617 [2024-07-15 21:40:24.189664] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 372913 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LJPVOsPDMh 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LJPVOsPDMh 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LJPVOsPDMh 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LJPVOsPDMh' 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=372933 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 372933 /var/tmp/bdevperf.sock 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 372933 ']' 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.617 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.875 [2024-07-15 21:40:24.424330] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:18:33.875 [2024-07-15 21:40:24.424428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372933 ] 00:18:33.875 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.875 [2024-07-15 21:40:24.480323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.875 [2024-07-15 21:40:24.579779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.875 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.875 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.875 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LJPVOsPDMh 00:18:34.441 [2024-07-15 21:40:24.937941] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.441 [2024-07-15 21:40:24.938048] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:34.441 [2024-07-15 21:40:24.942945] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:34.441 [2024-07-15 21:40:24.942974] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:34.441 [2024-07-15 21:40:24.943008] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:34.441 [2024-07-15 21:40:24.943658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2086470 (107): Transport endpoint is not connected 00:18:34.441 [2024-07-15 21:40:24.944649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2086470 (9): Bad file descriptor 00:18:34.441 [2024-07-15 21:40:24.945665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:34.441 [2024-07-15 21:40:24.945682] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:34.441 [2024-07-15 21:40:24.945699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:18:34.441 request: 00:18:34.441 { 00:18:34.441 "name": "TLSTEST", 00:18:34.442 "trtype": "tcp", 00:18:34.442 "traddr": "10.0.0.2", 00:18:34.442 "adrfam": "ipv4", 00:18:34.442 "trsvcid": "4420", 00:18:34.442 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:34.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.442 "prchk_reftag": false, 00:18:34.442 "prchk_guard": false, 00:18:34.442 "hdgst": false, 00:18:34.442 "ddgst": false, 00:18:34.442 "psk": "/tmp/tmp.LJPVOsPDMh", 00:18:34.442 "method": "bdev_nvme_attach_controller", 00:18:34.442 "req_id": 1 00:18:34.442 } 00:18:34.442 Got JSON-RPC error response 00:18:34.442 response: 00:18:34.442 { 00:18:34.442 "code": -5, 00:18:34.442 "message": "Input/output error" 00:18:34.442 } 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 372933 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 372933 ']' 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 372933 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 372933 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 372933' 00:18:34.442 killing process with pid 372933 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 372933 00:18:34.442 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.442 00:18:34.442 Latency(us) 00:18:34.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.442 =================================================================================================================== 00:18:34.442 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:34.442 [2024-07-15 21:40:24.988938] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:34.442 21:40:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 372933 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 '' 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=373041 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 373041 /var/tmp/bdevperf.sock 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 373041 ']' 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.442 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.442 [2024-07-15 21:40:25.224170] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:34.442 [2024-07-15 21:40:25.224267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373041 ] 00:18:34.699 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.699 [2024-07-15 21:40:25.279604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.699 [2024-07-15 21:40:25.386204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.699 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.700 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:34.700 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:35.266 [2024-07-15 21:40:25.764153] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:35.266 [2024-07-15 21:40:25.765902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66aa20 (9): Bad file descriptor 00:18:35.266 [2024-07-15 21:40:25.766905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.266 [2024-07-15 21:40:25.766926] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:35.266 [2024-07-15 21:40:25.766944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.266 request: 00:18:35.266 { 00:18:35.266 "name": "TLSTEST", 00:18:35.266 "trtype": "tcp", 00:18:35.266 "traddr": "10.0.0.2", 00:18:35.266 "adrfam": "ipv4", 00:18:35.266 "trsvcid": "4420", 00:18:35.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.266 "prchk_reftag": false, 00:18:35.266 "prchk_guard": false, 00:18:35.266 "hdgst": false, 00:18:35.266 "ddgst": false, 00:18:35.266 "method": "bdev_nvme_attach_controller", 00:18:35.266 "req_id": 1 00:18:35.266 } 00:18:35.266 Got JSON-RPC error response 00:18:35.266 response: 00:18:35.266 { 00:18:35.266 "code": -5, 00:18:35.266 "message": "Input/output error" 00:18:35.266 } 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 373041 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 373041 ']' 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 373041 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 373041 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 373041' 00:18:35.266 killing process with pid 373041 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 373041 00:18:35.266 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.266 00:18:35.266 Latency(us) 00:18:35.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.266 =================================================================================================================== 00:18:35.266 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 373041 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 370347 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 370347 ']' 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 370347 00:18:35.266 21:40:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:35.266 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.266 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 370347 00:18:35.266 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:35.266 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:35.266 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 370347' 00:18:35.266 killing process with pid 370347 00:18:35.266 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 370347 
00:18:35.266 [2024-07-15 21:40:26.025151] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:35.266 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 370347 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.nyo2u5M258 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.nyo2u5M258 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=373158 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 373158 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 373158 ']' 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.525 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.783 [2024-07-15 21:40:26.338333] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:35.783 [2024-07-15 21:40:26.338431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.783 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.783 [2024-07-15 21:40:26.398368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.783 [2024-07-15 21:40:26.502730] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.783 [2024-07-15 21:40:26.502783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:35.783 [2024-07-15 21:40:26.502809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.783 [2024-07-15 21:40:26.502821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.783 [2024-07-15 21:40:26.502838] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.783 [2024-07-15 21:40:26.502864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.nyo2u5M258 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nyo2u5M258 00:18:36.040 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:36.297 [2024-07-15 21:40:26.906506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.297 21:40:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:36.554 21:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:18:36.810 [2024-07-15 21:40:27.492047] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.810 [2024-07-15 21:40:27.492272] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.810 21:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:37.067 malloc0 00:18:37.067 21:40:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:37.323 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nyo2u5M258 00:18:37.887 [2024-07-15 21:40:28.383582] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nyo2u5M258 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nyo2u5M258' 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=373376 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 373376 /var/tmp/bdevperf.sock 00:18:37.887 21:40:28 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 373376 ']' 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.887 21:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.887 [2024-07-15 21:40:28.451752] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:18:37.887 [2024-07-15 21:40:28.451865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373376 ] 00:18:37.887 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.887 [2024-07-15 21:40:28.507633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.887 [2024-07-15 21:40:28.604095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.143 21:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.143 21:40:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:38.143 21:40:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nyo2u5M258 00:18:38.399 [2024-07-15 21:40:28.977592] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.399 [2024-07-15 21:40:28.977700] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:38.399 TLSTESTn1 00:18:38.399 21:40:29 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:38.399 Running I/O for 10 seconds... 
00:18:50.590 00:18:50.590 Latency(us) 00:18:50.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.590 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:50.590 Verification LBA range: start 0x0 length 0x2000 00:18:50.590 TLSTESTn1 : 10.04 3697.66 14.44 0.00 0.00 34535.28 6893.42 38059.43 00:18:50.590 =================================================================================================================== 00:18:50.590 Total : 3697.66 14.44 0.00 0.00 34535.28 6893.42 38059.43 00:18:50.590 0 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 373376 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 373376 ']' 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 373376 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 373376 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 373376' 00:18:50.590 killing process with pid 373376 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 373376 00:18:50.590 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.590 00:18:50.590 Latency(us) 00:18:50.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.590 
=================================================================================================================== 00:18:50.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.590 [2024-07-15 21:40:39.294597] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 373376 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.nyo2u5M258 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nyo2u5M258 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nyo2u5M258 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.nyo2u5M258 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.nyo2u5M258' 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=374378 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 374378 /var/tmp/bdevperf.sock 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 374378 ']' 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.590 [2024-07-15 21:40:39.542251] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:18:50.590 [2024-07-15 21:40:39.542352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374378 ] 00:18:50.590 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.590 [2024-07-15 21:40:39.598847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.590 [2024-07-15 21:40:39.697622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:50.590 21:40:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nyo2u5M258 00:18:50.590 [2024-07-15 21:40:40.072874] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.590 [2024-07-15 21:40:40.072949] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:50.590 [2024-07-15 21:40:40.072964] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.nyo2u5M258 00:18:50.590 request: 00:18:50.590 { 00:18:50.590 "name": "TLSTEST", 00:18:50.590 "trtype": "tcp", 00:18:50.590 "traddr": "10.0.0.2", 00:18:50.590 "adrfam": "ipv4", 00:18:50.590 "trsvcid": "4420", 00:18:50.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.590 "prchk_reftag": false, 00:18:50.590 "prchk_guard": false, 00:18:50.590 "hdgst": false, 00:18:50.590 "ddgst": false, 00:18:50.590 "psk": "/tmp/tmp.nyo2u5M258", 00:18:50.590 "method": "bdev_nvme_attach_controller", 
00:18:50.590 "req_id": 1 00:18:50.590 } 00:18:50.590 Got JSON-RPC error response 00:18:50.590 response: 00:18:50.590 { 00:18:50.590 "code": -1, 00:18:50.590 "message": "Operation not permitted" 00:18:50.590 } 00:18:50.590 21:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 374378 00:18:50.590 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 374378 ']' 00:18:50.590 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 374378 00:18:50.590 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:50.590 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.590 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 374378 00:18:50.590 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 374378' 00:18:50.591 killing process with pid 374378 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 374378 00:18:50.591 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.591 00:18:50.591 Latency(us) 00:18:50.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.591 =================================================================================================================== 00:18:50.591 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 374378 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:50.591 21:40:40 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 373158 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 373158 ']' 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 373158 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 373158 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 373158' 00:18:50.591 killing process with pid 373158 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 373158 00:18:50.591 [2024-07-15 21:40:40.332580] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 373158 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=374487 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 374487 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 374487 ']' 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.591 [2024-07-15 21:40:40.588161] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:50.591 [2024-07-15 21:40:40.588279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.591 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.591 [2024-07-15 21:40:40.648223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.591 [2024-07-15 21:40:40.750444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.591 [2024-07-15 21:40:40.750497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:50.591 [2024-07-15 21:40:40.750522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.591 [2024-07-15 21:40:40.750534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.591 [2024-07-15 21:40:40.750545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.591 [2024-07-15 21:40:40.750572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.nyo2u5M258 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.nyo2u5M258 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.nyo2u5M258 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- 
target/tls.sh@49 -- # local key=/tmp/tmp.nyo2u5M258 00:18:50.591 21:40:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.591 [2024-07-15 21:40:41.153023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.591 21:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:50.849 21:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:50.849 [2024-07-15 21:40:41.626278] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.849 [2024-07-15 21:40:41.626459] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.106 21:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.106 malloc0 00:18:51.106 21:40:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.364 21:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nyo2u5M258 00:18:51.621 [2024-07-15 21:40:42.353131] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:51.621 [2024-07-15 21:40:42.353191] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:51.621 [2024-07-15 21:40:42.353225] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:51.621 
request: 00:18:51.621 { 00:18:51.621 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.621 "host": "nqn.2016-06.io.spdk:host1", 00:18:51.621 "psk": "/tmp/tmp.nyo2u5M258", 00:18:51.621 "method": "nvmf_subsystem_add_host", 00:18:51.621 "req_id": 1 00:18:51.621 } 00:18:51.621 Got JSON-RPC error response 00:18:51.621 response: 00:18:51.621 { 00:18:51.621 "code": -32603, 00:18:51.621 "message": "Internal error" 00:18:51.621 } 00:18:51.621 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:51.621 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:51.621 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:51.621 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:51.621 21:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 374487 00:18:51.621 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 374487 ']' 00:18:51.621 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 374487 00:18:51.621 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:51.622 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.622 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 374487 00:18:51.622 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:51.622 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:51.622 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 374487' 00:18:51.622 killing process with pid 374487 00:18:51.622 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 374487 00:18:51.622 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 374487 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.nyo2u5M258 
00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=374719 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 374719 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 374719 ']' 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.880 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.880 [2024-07-15 21:40:42.654777] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:18:51.880 [2024-07-15 21:40:42.654878] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.139 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.139 [2024-07-15 21:40:42.715302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.139 [2024-07-15 21:40:42.821265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.139 [2024-07-15 21:40:42.821315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.139 [2024-07-15 21:40:42.821329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.139 [2024-07-15 21:40:42.821341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.139 [2024-07-15 21:40:42.821351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.139 [2024-07-15 21:40:42.821383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.139 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:52.139 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:52.139 21:40:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:52.139 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:52.139 21:40:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.410 21:40:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.410 21:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.nyo2u5M258 00:18:52.410 21:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nyo2u5M258 00:18:52.410 21:40:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:52.667 [2024-07-15 21:40:43.221338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.667 21:40:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:52.923 21:40:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:53.179 [2024-07-15 21:40:43.818883] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.179 [2024-07-15 21:40:43.819057] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.179 21:40:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:18:53.436 malloc0 00:18:53.436 21:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:53.693 21:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nyo2u5M258 00:18:53.950 [2024-07-15 21:40:44.710632] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=374937 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 374937 /var/tmp/bdevperf.sock 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 374937 ']' 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.950 21:40:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.206 [2024-07-15 21:40:44.775085] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:54.207 [2024-07-15 21:40:44.775195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374937 ] 00:18:54.207 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.207 [2024-07-15 21:40:44.832023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.207 [2024-07-15 21:40:44.938082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.463 21:40:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.463 21:40:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:54.463 21:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nyo2u5M258 00:18:54.721 [2024-07-15 21:40:45.307010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.721 [2024-07-15 21:40:45.307127] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:54.721 TLSTESTn1 00:18:54.721 21:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:54.979 21:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:54.979 "subsystems": [ 00:18:54.979 { 00:18:54.979 
"subsystem": "keyring", 00:18:54.979 "config": [] 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "subsystem": "iobuf", 00:18:54.979 "config": [ 00:18:54.979 { 00:18:54.979 "method": "iobuf_set_options", 00:18:54.979 "params": { 00:18:54.979 "small_pool_count": 8192, 00:18:54.979 "large_pool_count": 1024, 00:18:54.979 "small_bufsize": 8192, 00:18:54.979 "large_bufsize": 135168 00:18:54.979 } 00:18:54.979 } 00:18:54.979 ] 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "subsystem": "sock", 00:18:54.979 "config": [ 00:18:54.979 { 00:18:54.979 "method": "sock_set_default_impl", 00:18:54.979 "params": { 00:18:54.979 "impl_name": "posix" 00:18:54.979 } 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "method": "sock_impl_set_options", 00:18:54.979 "params": { 00:18:54.979 "impl_name": "ssl", 00:18:54.979 "recv_buf_size": 4096, 00:18:54.979 "send_buf_size": 4096, 00:18:54.979 "enable_recv_pipe": true, 00:18:54.979 "enable_quickack": false, 00:18:54.979 "enable_placement_id": 0, 00:18:54.979 "enable_zerocopy_send_server": true, 00:18:54.979 "enable_zerocopy_send_client": false, 00:18:54.979 "zerocopy_threshold": 0, 00:18:54.979 "tls_version": 0, 00:18:54.979 "enable_ktls": false 00:18:54.979 } 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "method": "sock_impl_set_options", 00:18:54.979 "params": { 00:18:54.979 "impl_name": "posix", 00:18:54.979 "recv_buf_size": 2097152, 00:18:54.979 "send_buf_size": 2097152, 00:18:54.979 "enable_recv_pipe": true, 00:18:54.979 "enable_quickack": false, 00:18:54.979 "enable_placement_id": 0, 00:18:54.979 "enable_zerocopy_send_server": true, 00:18:54.979 "enable_zerocopy_send_client": false, 00:18:54.979 "zerocopy_threshold": 0, 00:18:54.979 "tls_version": 0, 00:18:54.979 "enable_ktls": false 00:18:54.979 } 00:18:54.979 } 00:18:54.979 ] 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "subsystem": "vmd", 00:18:54.979 "config": [] 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "subsystem": "accel", 00:18:54.979 "config": [ 00:18:54.979 { 00:18:54.979 "method": 
"accel_set_options", 00:18:54.979 "params": { 00:18:54.979 "small_cache_size": 128, 00:18:54.979 "large_cache_size": 16, 00:18:54.979 "task_count": 2048, 00:18:54.979 "sequence_count": 2048, 00:18:54.979 "buf_count": 2048 00:18:54.979 } 00:18:54.979 } 00:18:54.979 ] 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "subsystem": "bdev", 00:18:54.979 "config": [ 00:18:54.979 { 00:18:54.979 "method": "bdev_set_options", 00:18:54.979 "params": { 00:18:54.979 "bdev_io_pool_size": 65535, 00:18:54.979 "bdev_io_cache_size": 256, 00:18:54.979 "bdev_auto_examine": true, 00:18:54.979 "iobuf_small_cache_size": 128, 00:18:54.979 "iobuf_large_cache_size": 16 00:18:54.979 } 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "method": "bdev_raid_set_options", 00:18:54.979 "params": { 00:18:54.979 "process_window_size_kb": 1024 00:18:54.979 } 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "method": "bdev_iscsi_set_options", 00:18:54.979 "params": { 00:18:54.979 "timeout_sec": 30 00:18:54.979 } 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "method": "bdev_nvme_set_options", 00:18:54.979 "params": { 00:18:54.979 "action_on_timeout": "none", 00:18:54.979 "timeout_us": 0, 00:18:54.979 "timeout_admin_us": 0, 00:18:54.979 "keep_alive_timeout_ms": 10000, 00:18:54.979 "arbitration_burst": 0, 00:18:54.979 "low_priority_weight": 0, 00:18:54.979 "medium_priority_weight": 0, 00:18:54.979 "high_priority_weight": 0, 00:18:54.979 "nvme_adminq_poll_period_us": 10000, 00:18:54.979 "nvme_ioq_poll_period_us": 0, 00:18:54.979 "io_queue_requests": 0, 00:18:54.979 "delay_cmd_submit": true, 00:18:54.979 "transport_retry_count": 4, 00:18:54.979 "bdev_retry_count": 3, 00:18:54.979 "transport_ack_timeout": 0, 00:18:54.979 "ctrlr_loss_timeout_sec": 0, 00:18:54.979 "reconnect_delay_sec": 0, 00:18:54.979 "fast_io_fail_timeout_sec": 0, 00:18:54.979 "disable_auto_failback": false, 00:18:54.979 "generate_uuids": false, 00:18:54.979 "transport_tos": 0, 00:18:54.979 "nvme_error_stat": false, 00:18:54.979 "rdma_srq_size": 0, 
00:18:54.979 "io_path_stat": false, 00:18:54.979 "allow_accel_sequence": false, 00:18:54.979 "rdma_max_cq_size": 0, 00:18:54.979 "rdma_cm_event_timeout_ms": 0, 00:18:54.979 "dhchap_digests": [ 00:18:54.979 "sha256", 00:18:54.979 "sha384", 00:18:54.979 "sha512" 00:18:54.979 ], 00:18:54.979 "dhchap_dhgroups": [ 00:18:54.979 "null", 00:18:54.979 "ffdhe2048", 00:18:54.979 "ffdhe3072", 00:18:54.979 "ffdhe4096", 00:18:54.979 "ffdhe6144", 00:18:54.979 "ffdhe8192" 00:18:54.979 ] 00:18:54.979 } 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "method": "bdev_nvme_set_hotplug", 00:18:54.979 "params": { 00:18:54.979 "period_us": 100000, 00:18:54.979 "enable": false 00:18:54.979 } 00:18:54.979 }, 00:18:54.979 { 00:18:54.979 "method": "bdev_malloc_create", 00:18:54.980 "params": { 00:18:54.980 "name": "malloc0", 00:18:54.980 "num_blocks": 8192, 00:18:54.980 "block_size": 4096, 00:18:54.980 "physical_block_size": 4096, 00:18:54.980 "uuid": "a5f32deb-d3f4-4608-a287-64d5e70b7cf4", 00:18:54.980 "optimal_io_boundary": 0 00:18:54.980 } 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "method": "bdev_wait_for_examine" 00:18:54.980 } 00:18:54.980 ] 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "subsystem": "nbd", 00:18:54.980 "config": [] 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "subsystem": "scheduler", 00:18:54.980 "config": [ 00:18:54.980 { 00:18:54.980 "method": "framework_set_scheduler", 00:18:54.980 "params": { 00:18:54.980 "name": "static" 00:18:54.980 } 00:18:54.980 } 00:18:54.980 ] 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "subsystem": "nvmf", 00:18:54.980 "config": [ 00:18:54.980 { 00:18:54.980 "method": "nvmf_set_config", 00:18:54.980 "params": { 00:18:54.980 "discovery_filter": "match_any", 00:18:54.980 "admin_cmd_passthru": { 00:18:54.980 "identify_ctrlr": false 00:18:54.980 } 00:18:54.980 } 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "method": "nvmf_set_max_subsystems", 00:18:54.980 "params": { 00:18:54.980 "max_subsystems": 1024 00:18:54.980 } 00:18:54.980 }, 00:18:54.980 { 
00:18:54.980 "method": "nvmf_set_crdt", 00:18:54.980 "params": { 00:18:54.980 "crdt1": 0, 00:18:54.980 "crdt2": 0, 00:18:54.980 "crdt3": 0 00:18:54.980 } 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "method": "nvmf_create_transport", 00:18:54.980 "params": { 00:18:54.980 "trtype": "TCP", 00:18:54.980 "max_queue_depth": 128, 00:18:54.980 "max_io_qpairs_per_ctrlr": 127, 00:18:54.980 "in_capsule_data_size": 4096, 00:18:54.980 "max_io_size": 131072, 00:18:54.980 "io_unit_size": 131072, 00:18:54.980 "max_aq_depth": 128, 00:18:54.980 "num_shared_buffers": 511, 00:18:54.980 "buf_cache_size": 4294967295, 00:18:54.980 "dif_insert_or_strip": false, 00:18:54.980 "zcopy": false, 00:18:54.980 "c2h_success": false, 00:18:54.980 "sock_priority": 0, 00:18:54.980 "abort_timeout_sec": 1, 00:18:54.980 "ack_timeout": 0, 00:18:54.980 "data_wr_pool_size": 0 00:18:54.980 } 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "method": "nvmf_create_subsystem", 00:18:54.980 "params": { 00:18:54.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.980 "allow_any_host": false, 00:18:54.980 "serial_number": "SPDK00000000000001", 00:18:54.980 "model_number": "SPDK bdev Controller", 00:18:54.980 "max_namespaces": 10, 00:18:54.980 "min_cntlid": 1, 00:18:54.980 "max_cntlid": 65519, 00:18:54.980 "ana_reporting": false 00:18:54.980 } 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "method": "nvmf_subsystem_add_host", 00:18:54.980 "params": { 00:18:54.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.980 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.980 "psk": "/tmp/tmp.nyo2u5M258" 00:18:54.980 } 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "method": "nvmf_subsystem_add_ns", 00:18:54.980 "params": { 00:18:54.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.980 "namespace": { 00:18:54.980 "nsid": 1, 00:18:54.980 "bdev_name": "malloc0", 00:18:54.980 "nguid": "A5F32DEBD3F44608A28764D5E70B7CF4", 00:18:54.980 "uuid": "a5f32deb-d3f4-4608-a287-64d5e70b7cf4", 00:18:54.980 "no_auto_visible": false 00:18:54.980 } 00:18:54.980 
} 00:18:54.980 }, 00:18:54.980 { 00:18:54.980 "method": "nvmf_subsystem_add_listener", 00:18:54.980 "params": { 00:18:54.980 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.980 "listen_address": { 00:18:54.980 "trtype": "TCP", 00:18:54.980 "adrfam": "IPv4", 00:18:54.980 "traddr": "10.0.0.2", 00:18:54.980 "trsvcid": "4420" 00:18:54.980 }, 00:18:54.980 "secure_channel": true 00:18:54.980 } 00:18:54.980 } 00:18:54.980 ] 00:18:54.980 } 00:18:54.980 ] 00:18:54.980 }' 00:18:54.980 21:40:45 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:55.601 21:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:55.601 "subsystems": [ 00:18:55.601 { 00:18:55.601 "subsystem": "keyring", 00:18:55.601 "config": [] 00:18:55.601 }, 00:18:55.601 { 00:18:55.601 "subsystem": "iobuf", 00:18:55.601 "config": [ 00:18:55.601 { 00:18:55.601 "method": "iobuf_set_options", 00:18:55.601 "params": { 00:18:55.601 "small_pool_count": 8192, 00:18:55.601 "large_pool_count": 1024, 00:18:55.601 "small_bufsize": 8192, 00:18:55.601 "large_bufsize": 135168 00:18:55.601 } 00:18:55.601 } 00:18:55.601 ] 00:18:55.601 }, 00:18:55.601 { 00:18:55.601 "subsystem": "sock", 00:18:55.601 "config": [ 00:18:55.601 { 00:18:55.601 "method": "sock_set_default_impl", 00:18:55.601 "params": { 00:18:55.601 "impl_name": "posix" 00:18:55.601 } 00:18:55.601 }, 00:18:55.601 { 00:18:55.601 "method": "sock_impl_set_options", 00:18:55.601 "params": { 00:18:55.601 "impl_name": "ssl", 00:18:55.601 "recv_buf_size": 4096, 00:18:55.601 "send_buf_size": 4096, 00:18:55.601 "enable_recv_pipe": true, 00:18:55.601 "enable_quickack": false, 00:18:55.601 "enable_placement_id": 0, 00:18:55.601 "enable_zerocopy_send_server": true, 00:18:55.601 "enable_zerocopy_send_client": false, 00:18:55.601 "zerocopy_threshold": 0, 00:18:55.601 "tls_version": 0, 00:18:55.601 "enable_ktls": false 00:18:55.601 } 00:18:55.601 }, 00:18:55.601 { 
00:18:55.601 "method": "sock_impl_set_options", 00:18:55.601 "params": { 00:18:55.601 "impl_name": "posix", 00:18:55.601 "recv_buf_size": 2097152, 00:18:55.601 "send_buf_size": 2097152, 00:18:55.601 "enable_recv_pipe": true, 00:18:55.601 "enable_quickack": false, 00:18:55.601 "enable_placement_id": 0, 00:18:55.601 "enable_zerocopy_send_server": true, 00:18:55.601 "enable_zerocopy_send_client": false, 00:18:55.601 "zerocopy_threshold": 0, 00:18:55.601 "tls_version": 0, 00:18:55.601 "enable_ktls": false 00:18:55.601 } 00:18:55.601 } 00:18:55.601 ] 00:18:55.601 }, 00:18:55.601 { 00:18:55.601 "subsystem": "vmd", 00:18:55.601 "config": [] 00:18:55.601 }, 00:18:55.601 { 00:18:55.601 "subsystem": "accel", 00:18:55.601 "config": [ 00:18:55.601 { 00:18:55.601 "method": "accel_set_options", 00:18:55.601 "params": { 00:18:55.601 "small_cache_size": 128, 00:18:55.601 "large_cache_size": 16, 00:18:55.601 "task_count": 2048, 00:18:55.601 "sequence_count": 2048, 00:18:55.601 "buf_count": 2048 00:18:55.601 } 00:18:55.601 } 00:18:55.601 ] 00:18:55.601 }, 00:18:55.601 { 00:18:55.601 "subsystem": "bdev", 00:18:55.601 "config": [ 00:18:55.601 { 00:18:55.601 "method": "bdev_set_options", 00:18:55.601 "params": { 00:18:55.602 "bdev_io_pool_size": 65535, 00:18:55.602 "bdev_io_cache_size": 256, 00:18:55.602 "bdev_auto_examine": true, 00:18:55.602 "iobuf_small_cache_size": 128, 00:18:55.602 "iobuf_large_cache_size": 16 00:18:55.602 } 00:18:55.602 }, 00:18:55.602 { 00:18:55.602 "method": "bdev_raid_set_options", 00:18:55.602 "params": { 00:18:55.602 "process_window_size_kb": 1024 00:18:55.602 } 00:18:55.602 }, 00:18:55.602 { 00:18:55.602 "method": "bdev_iscsi_set_options", 00:18:55.602 "params": { 00:18:55.602 "timeout_sec": 30 00:18:55.602 } 00:18:55.602 }, 00:18:55.602 { 00:18:55.602 "method": "bdev_nvme_set_options", 00:18:55.602 "params": { 00:18:55.602 "action_on_timeout": "none", 00:18:55.602 "timeout_us": 0, 00:18:55.602 "timeout_admin_us": 0, 00:18:55.602 "keep_alive_timeout_ms": 
10000, 00:18:55.602 "arbitration_burst": 0, 00:18:55.602 "low_priority_weight": 0, 00:18:55.602 "medium_priority_weight": 0, 00:18:55.602 "high_priority_weight": 0, 00:18:55.602 "nvme_adminq_poll_period_us": 10000, 00:18:55.602 "nvme_ioq_poll_period_us": 0, 00:18:55.602 "io_queue_requests": 512, 00:18:55.602 "delay_cmd_submit": true, 00:18:55.602 "transport_retry_count": 4, 00:18:55.602 "bdev_retry_count": 3, 00:18:55.602 "transport_ack_timeout": 0, 00:18:55.602 "ctrlr_loss_timeout_sec": 0, 00:18:55.602 "reconnect_delay_sec": 0, 00:18:55.602 "fast_io_fail_timeout_sec": 0, 00:18:55.602 "disable_auto_failback": false, 00:18:55.602 "generate_uuids": false, 00:18:55.602 "transport_tos": 0, 00:18:55.602 "nvme_error_stat": false, 00:18:55.602 "rdma_srq_size": 0, 00:18:55.602 "io_path_stat": false, 00:18:55.602 "allow_accel_sequence": false, 00:18:55.602 "rdma_max_cq_size": 0, 00:18:55.602 "rdma_cm_event_timeout_ms": 0, 00:18:55.602 "dhchap_digests": [ 00:18:55.602 "sha256", 00:18:55.602 "sha384", 00:18:55.602 "sha512" 00:18:55.602 ], 00:18:55.602 "dhchap_dhgroups": [ 00:18:55.602 "null", 00:18:55.602 "ffdhe2048", 00:18:55.602 "ffdhe3072", 00:18:55.602 "ffdhe4096", 00:18:55.602 "ffdhe6144", 00:18:55.602 "ffdhe8192" 00:18:55.602 ] 00:18:55.602 } 00:18:55.602 }, 00:18:55.602 { 00:18:55.602 "method": "bdev_nvme_attach_controller", 00:18:55.602 "params": { 00:18:55.602 "name": "TLSTEST", 00:18:55.602 "trtype": "TCP", 00:18:55.602 "adrfam": "IPv4", 00:18:55.602 "traddr": "10.0.0.2", 00:18:55.602 "trsvcid": "4420", 00:18:55.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.602 "prchk_reftag": false, 00:18:55.602 "prchk_guard": false, 00:18:55.602 "ctrlr_loss_timeout_sec": 0, 00:18:55.602 "reconnect_delay_sec": 0, 00:18:55.602 "fast_io_fail_timeout_sec": 0, 00:18:55.602 "psk": "/tmp/tmp.nyo2u5M258", 00:18:55.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.602 "hdgst": false, 00:18:55.602 "ddgst": false 00:18:55.602 } 00:18:55.602 }, 00:18:55.602 { 00:18:55.602 "method": 
"bdev_nvme_set_hotplug", 00:18:55.602 "params": { 00:18:55.602 "period_us": 100000, 00:18:55.602 "enable": false 00:18:55.602 } 00:18:55.602 }, 00:18:55.602 { 00:18:55.602 "method": "bdev_wait_for_examine" 00:18:55.602 } 00:18:55.602 ] 00:18:55.602 }, 00:18:55.602 { 00:18:55.602 "subsystem": "nbd", 00:18:55.602 "config": [] 00:18:55.602 } 00:18:55.602 ] 00:18:55.602 }' 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 374937 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 374937 ']' 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 374937 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 374937 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 374937' 00:18:55.602 killing process with pid 374937 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 374937 00:18:55.602 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.602 00:18:55.602 Latency(us) 00:18:55.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.602 =================================================================================================================== 00:18:55.602 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.602 [2024-07-15 21:40:46.146125] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:55.602 
21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 374937 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 374719 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 374719 ']' 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 374719 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 374719 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 374719' 00:18:55.602 killing process with pid 374719 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 374719 00:18:55.602 [2024-07-15 21:40:46.359153] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:55.602 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 374719 00:18:55.861 21:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:55.861 21:40:46 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:55.861 "subsystems": [ 00:18:55.861 { 00:18:55.861 "subsystem": "keyring", 00:18:55.861 "config": [] 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "subsystem": "iobuf", 00:18:55.861 "config": [ 00:18:55.861 { 00:18:55.861 "method": "iobuf_set_options", 00:18:55.861 "params": { 00:18:55.861 "small_pool_count": 8192, 00:18:55.861 "large_pool_count": 1024, 00:18:55.861 "small_bufsize": 8192, 00:18:55.861 "large_bufsize": 135168 00:18:55.861 } 00:18:55.861 } 
00:18:55.861 ] 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "subsystem": "sock", 00:18:55.861 "config": [ 00:18:55.861 { 00:18:55.861 "method": "sock_set_default_impl", 00:18:55.861 "params": { 00:18:55.861 "impl_name": "posix" 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "sock_impl_set_options", 00:18:55.861 "params": { 00:18:55.861 "impl_name": "ssl", 00:18:55.861 "recv_buf_size": 4096, 00:18:55.861 "send_buf_size": 4096, 00:18:55.861 "enable_recv_pipe": true, 00:18:55.861 "enable_quickack": false, 00:18:55.861 "enable_placement_id": 0, 00:18:55.861 "enable_zerocopy_send_server": true, 00:18:55.861 "enable_zerocopy_send_client": false, 00:18:55.861 "zerocopy_threshold": 0, 00:18:55.861 "tls_version": 0, 00:18:55.861 "enable_ktls": false 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "sock_impl_set_options", 00:18:55.861 "params": { 00:18:55.861 "impl_name": "posix", 00:18:55.861 "recv_buf_size": 2097152, 00:18:55.861 "send_buf_size": 2097152, 00:18:55.861 "enable_recv_pipe": true, 00:18:55.861 "enable_quickack": false, 00:18:55.861 "enable_placement_id": 0, 00:18:55.861 "enable_zerocopy_send_server": true, 00:18:55.861 "enable_zerocopy_send_client": false, 00:18:55.861 "zerocopy_threshold": 0, 00:18:55.861 "tls_version": 0, 00:18:55.861 "enable_ktls": false 00:18:55.861 } 00:18:55.861 } 00:18:55.861 ] 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "subsystem": "vmd", 00:18:55.861 "config": [] 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "subsystem": "accel", 00:18:55.861 "config": [ 00:18:55.861 { 00:18:55.861 "method": "accel_set_options", 00:18:55.861 "params": { 00:18:55.861 "small_cache_size": 128, 00:18:55.861 "large_cache_size": 16, 00:18:55.861 "task_count": 2048, 00:18:55.861 "sequence_count": 2048, 00:18:55.861 "buf_count": 2048 00:18:55.861 } 00:18:55.861 } 00:18:55.861 ] 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "subsystem": "bdev", 00:18:55.861 "config": [ 00:18:55.861 { 00:18:55.861 "method": 
"bdev_set_options", 00:18:55.861 "params": { 00:18:55.861 "bdev_io_pool_size": 65535, 00:18:55.861 "bdev_io_cache_size": 256, 00:18:55.861 "bdev_auto_examine": true, 00:18:55.861 "iobuf_small_cache_size": 128, 00:18:55.861 "iobuf_large_cache_size": 16 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "bdev_raid_set_options", 00:18:55.861 "params": { 00:18:55.861 "process_window_size_kb": 1024 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "bdev_iscsi_set_options", 00:18:55.861 "params": { 00:18:55.861 "timeout_sec": 30 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "bdev_nvme_set_options", 00:18:55.861 "params": { 00:18:55.861 "action_on_timeout": "none", 00:18:55.861 "timeout_us": 0, 00:18:55.861 "timeout_admin_us": 0, 00:18:55.861 "keep_alive_timeout_ms": 10000, 00:18:55.861 "arbitration_burst": 0, 00:18:55.861 "low_priority_weight": 0, 00:18:55.861 "medium_priority_weight": 0, 00:18:55.861 "high_priority_weight": 0, 00:18:55.861 "nvme_adminq_poll_period_us": 10000, 00:18:55.861 "nvme_ioq_poll_period_us": 0, 00:18:55.861 "io_queue_requests": 0, 00:18:55.861 "delay_cmd_submit": true, 00:18:55.861 "transport_retry_count": 4, 00:18:55.861 "bdev_retry_count": 3, 00:18:55.861 "transport_ack_timeout": 0, 00:18:55.861 "ctrlr_loss_timeout_sec": 0, 00:18:55.861 "reconnect_delay_sec": 0, 00:18:55.861 "fast_io_fail_timeout_sec": 0, 00:18:55.861 "disable_auto_failback": false, 00:18:55.861 "generate_uuids": false, 00:18:55.861 "transport_tos": 0, 00:18:55.861 "nvme_error_stat": false, 00:18:55.861 "rdma_srq_size": 0, 00:18:55.861 "io_path_stat": false, 00:18:55.861 "allow_accel_sequence": false, 00:18:55.861 "rdma_max_cq_size": 0, 00:18:55.861 "rdma_cm_event_timeout_ms": 0, 00:18:55.861 "dhchap_digests": [ 00:18:55.861 "sha256", 00:18:55.861 "sha384", 00:18:55.861 "sha512" 00:18:55.861 ], 00:18:55.861 "dhchap_dhgroups": [ 00:18:55.861 "null", 00:18:55.861 "ffdhe2048", 00:18:55.861 "ffdhe3072", 00:18:55.861 
"ffdhe4096", 00:18:55.861 "ffdhe6144", 00:18:55.861 "ffdhe8192" 00:18:55.861 ] 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "bdev_nvme_set_hotplug", 00:18:55.861 "params": { 00:18:55.861 "period_us": 100000, 00:18:55.861 "enable": false 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "bdev_malloc_create", 00:18:55.861 "params": { 00:18:55.861 "name": "malloc0", 00:18:55.861 "num_blocks": 8192, 00:18:55.861 "block_size": 4096, 00:18:55.861 "physical_block_size": 4096, 00:18:55.861 "uuid": "a5f32deb-d3f4-4608-a287-64d5e70b7cf4", 00:18:55.861 "optimal_io_boundary": 0 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "bdev_wait_for_examine" 00:18:55.861 } 00:18:55.861 ] 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "subsystem": "nbd", 00:18:55.861 "config": [] 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "subsystem": "scheduler", 00:18:55.861 "config": [ 00:18:55.861 { 00:18:55.861 "method": "framework_set_scheduler", 00:18:55.861 "params": { 00:18:55.861 "name": "static" 00:18:55.861 } 00:18:55.861 } 00:18:55.861 ] 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "subsystem": "nvmf", 00:18:55.861 "config": [ 00:18:55.861 { 00:18:55.861 "method": "nvmf_set_config", 00:18:55.861 "params": { 00:18:55.861 "discovery_filter": "match_any", 00:18:55.861 "admin_cmd_passthru": { 00:18:55.861 "identify_ctrlr": false 00:18:55.861 } 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "nvmf_set_max_subsystems", 00:18:55.861 "params": { 00:18:55.861 "max_subsystems": 1024 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "nvmf_set_crdt", 00:18:55.861 "params": { 00:18:55.861 "crdt1": 0, 00:18:55.861 "crdt2": 0, 00:18:55.861 "crdt3": 0 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "nvmf_create_transport", 00:18:55.861 "params": { 00:18:55.861 "trtype": "TCP", 00:18:55.861 "max_queue_depth": 128, 00:18:55.861 "max_io_qpairs_per_ctrlr": 127, 00:18:55.861 
"in_capsule_data_size": 4096, 00:18:55.861 "max_io_size": 131072, 00:18:55.861 "io_unit_size": 131072, 00:18:55.861 "max_aq_depth": 128, 00:18:55.861 "num_shared_buffers": 511, 00:18:55.861 "buf_cache_size": 4294967295, 00:18:55.861 "dif_insert_or_strip": false, 00:18:55.861 "zcopy": false, 00:18:55.861 "c2h_success": false, 00:18:55.861 "sock_priority": 0, 00:18:55.861 "abort_timeout_sec": 1, 00:18:55.861 "ack_timeout": 0, 00:18:55.861 "data_wr_pool_size": 0 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "nvmf_create_subsystem", 00:18:55.861 "params": { 00:18:55.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.861 "allow_any_host": false, 00:18:55.861 "serial_number": "SPDK00000000000001", 00:18:55.861 "model_number": "SPDK bdev Controller", 00:18:55.861 "max_namespaces": 10, 00:18:55.861 "min_cntlid": 1, 00:18:55.861 "max_cntlid": 65519, 00:18:55.861 "ana_reporting": false 00:18:55.861 } 00:18:55.861 }, 00:18:55.861 { 00:18:55.861 "method": "nvmf_subsystem_add_host", 00:18:55.861 "params": { 00:18:55.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.862 "host": "nqn.2016-06.io.spdk:host1", 00:18:55.862 "psk": "/tmp/tmp.nyo2u5M258" 00:18:55.862 } 00:18:55.862 }, 00:18:55.862 { 00:18:55.862 "method": "nvmf_subsystem_add_ns", 00:18:55.862 "params": { 00:18:55.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.862 "namespace": { 00:18:55.862 "nsid": 1, 00:18:55.862 "bdev_name": "malloc0", 00:18:55.862 "nguid": "A5F32DEBD3F44608A28764D5E70B7CF4", 00:18:55.862 "uuid": "a5f32deb-d3f4-4608-a287-64d5e70b7cf4", 00:18:55.862 "no_auto_visible": false 00:18:55.862 } 00:18:55.862 } 00:18:55.862 }, 00:18:55.862 { 00:18:55.862 "method": "nvmf_subsystem_add_listener", 00:18:55.862 "params": { 00:18:55.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.862 "listen_address": { 00:18:55.862 "trtype": "TCP", 00:18:55.862 "adrfam": "IPv4", 00:18:55.862 "traddr": "10.0.0.2", 00:18:55.862 "trsvcid": "4420" 00:18:55.862 }, 00:18:55.862 "secure_channel": true 
00:18:55.862 } 00:18:55.862 } 00:18:55.862 ] 00:18:55.862 } 00:18:55.862 ] 00:18:55.862 }' 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=375112 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 375112 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 375112 ']' 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.862 21:40:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.862 [2024-07-15 21:40:46.612771] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:18:55.862 [2024-07-15 21:40:46.612867] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.862 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.119 [2024-07-15 21:40:46.673432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.119 [2024-07-15 21:40:46.775358] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.119 [2024-07-15 21:40:46.775412] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.119 [2024-07-15 21:40:46.775437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.119 [2024-07-15 21:40:46.775449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.119 [2024-07-15 21:40:46.775459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:56.119 [2024-07-15 21:40:46.775538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.375 [2024-07-15 21:40:46.982639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.375 [2024-07-15 21:40:46.998587] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:56.375 [2024-07-15 21:40:47.014657] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.375 [2024-07-15 21:40:47.025282] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=375190 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 375190 /var/tmp/bdevperf.sock 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 375190 ']' 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:56.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:56.940 21:40:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:56.940 "subsystems": [ 00:18:56.940 { 00:18:56.940 "subsystem": "keyring", 00:18:56.940 "config": [] 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "subsystem": "iobuf", 00:18:56.940 "config": [ 00:18:56.940 { 00:18:56.940 "method": "iobuf_set_options", 00:18:56.940 "params": { 00:18:56.940 "small_pool_count": 8192, 00:18:56.940 "large_pool_count": 1024, 00:18:56.940 "small_bufsize": 8192, 00:18:56.940 "large_bufsize": 135168 00:18:56.940 } 00:18:56.940 } 00:18:56.940 ] 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "subsystem": "sock", 00:18:56.940 "config": [ 00:18:56.940 { 00:18:56.940 "method": "sock_set_default_impl", 00:18:56.940 "params": { 00:18:56.940 "impl_name": "posix" 00:18:56.940 } 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "method": "sock_impl_set_options", 00:18:56.940 "params": { 00:18:56.940 "impl_name": "ssl", 00:18:56.940 "recv_buf_size": 4096, 00:18:56.940 "send_buf_size": 4096, 00:18:56.940 "enable_recv_pipe": true, 00:18:56.940 "enable_quickack": false, 00:18:56.940 "enable_placement_id": 0, 00:18:56.940 "enable_zerocopy_send_server": true, 00:18:56.940 "enable_zerocopy_send_client": false, 00:18:56.940 "zerocopy_threshold": 0, 00:18:56.940 "tls_version": 0, 00:18:56.940 "enable_ktls": false 00:18:56.940 } 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "method": "sock_impl_set_options", 00:18:56.940 "params": { 00:18:56.940 "impl_name": "posix", 00:18:56.940 "recv_buf_size": 
2097152, 00:18:56.940 "send_buf_size": 2097152, 00:18:56.940 "enable_recv_pipe": true, 00:18:56.940 "enable_quickack": false, 00:18:56.940 "enable_placement_id": 0, 00:18:56.940 "enable_zerocopy_send_server": true, 00:18:56.940 "enable_zerocopy_send_client": false, 00:18:56.940 "zerocopy_threshold": 0, 00:18:56.940 "tls_version": 0, 00:18:56.940 "enable_ktls": false 00:18:56.940 } 00:18:56.940 } 00:18:56.940 ] 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "subsystem": "vmd", 00:18:56.940 "config": [] 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "subsystem": "accel", 00:18:56.940 "config": [ 00:18:56.940 { 00:18:56.940 "method": "accel_set_options", 00:18:56.940 "params": { 00:18:56.940 "small_cache_size": 128, 00:18:56.940 "large_cache_size": 16, 00:18:56.940 "task_count": 2048, 00:18:56.940 "sequence_count": 2048, 00:18:56.940 "buf_count": 2048 00:18:56.940 } 00:18:56.940 } 00:18:56.940 ] 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "subsystem": "bdev", 00:18:56.940 "config": [ 00:18:56.940 { 00:18:56.940 "method": "bdev_set_options", 00:18:56.940 "params": { 00:18:56.940 "bdev_io_pool_size": 65535, 00:18:56.940 "bdev_io_cache_size": 256, 00:18:56.940 "bdev_auto_examine": true, 00:18:56.940 "iobuf_small_cache_size": 128, 00:18:56.940 "iobuf_large_cache_size": 16 00:18:56.940 } 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "method": "bdev_raid_set_options", 00:18:56.940 "params": { 00:18:56.940 "process_window_size_kb": 1024 00:18:56.940 } 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "method": "bdev_iscsi_set_options", 00:18:56.940 "params": { 00:18:56.940 "timeout_sec": 30 00:18:56.940 } 00:18:56.940 }, 00:18:56.940 { 00:18:56.940 "method": "bdev_nvme_set_options", 00:18:56.940 "params": { 00:18:56.940 "action_on_timeout": "none", 00:18:56.940 "timeout_us": 0, 00:18:56.940 "timeout_admin_us": 0, 00:18:56.940 "keep_alive_timeout_ms": 10000, 00:18:56.940 "arbitration_burst": 0, 00:18:56.940 "low_priority_weight": 0, 00:18:56.940 "medium_priority_weight": 0, 00:18:56.940 
"high_priority_weight": 0, 00:18:56.940 "nvme_adminq_poll_period_us": 10000, 00:18:56.940 "nvme_ioq_poll_period_us": 0, 00:18:56.940 "io_queue_requests": 512, 00:18:56.940 "delay_cmd_submit": true, 00:18:56.940 "transport_retry_count": 4, 00:18:56.940 "bdev_retry_count": 3, 00:18:56.940 "transport_ack_timeout": 0, 00:18:56.940 "ctrlr_loss_timeout_sec": 0, 00:18:56.940 "reconnect_delay_sec": 0, 00:18:56.940 "fast_io_fail_timeout_sec": 0, 00:18:56.940 "disable_auto_failback": false, 00:18:56.940 "generate_uuids": false, 00:18:56.940 "transport_tos": 0, 00:18:56.940 "nvme_error_stat": false, 00:18:56.940 "rdma_srq_size": 0, 00:18:56.940 "io_path_stat": false, 00:18:56.940 "allow_accel_sequence": false, 00:18:56.940 "rdma_max_cq_size": 0, 00:18:56.940 "rdma_cm_event_timeout_ms": 0, 00:18:56.941 "dhchap_digests": [ 00:18:56.941 "sha256", 00:18:56.941 "sha384", 00:18:56.941 "sha512" 00:18:56.941 ], 00:18:56.941 "dhchap_dhgroups": [ 00:18:56.941 "null", 00:18:56.941 "ffdhe2048", 00:18:56.941 "ffdhe3072", 00:18:56.941 "ffdhe4096", 00:18:56.941 "ffdhe6144", 00:18:56.941 "ffdhe8192" 00:18:56.941 ] 00:18:56.941 } 00:18:56.941 }, 00:18:56.941 { 00:18:56.941 "method": "bdev_nvme_attach_controller", 00:18:56.941 "params": { 00:18:56.941 "name": "TLSTEST", 00:18:56.941 "trtype": "TCP", 00:18:56.941 "adrfam": "IPv4", 00:18:56.941 "traddr": "10.0.0.2", 00:18:56.941 "trsvcid": "4420", 00:18:56.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.941 "prchk_reftag": false, 00:18:56.941 "prchk_guard": false, 00:18:56.941 "ctrlr_loss_timeout_sec": 0, 00:18:56.941 "reconnect_delay_sec": 0, 00:18:56.941 "fast_io_fail_timeout_sec": 0, 00:18:56.941 "psk": "/tmp/tmp.nyo2u5M258", 00:18:56.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.941 "hdgst": false, 00:18:56.941 "ddgst": false 00:18:56.941 } 00:18:56.941 }, 00:18:56.941 { 00:18:56.941 "method": "bdev_nvme_set_hotplug", 00:18:56.941 "params": { 00:18:56.941 "period_us": 100000, 00:18:56.941 "enable": false 00:18:56.941 } 
00:18:56.941 }, 00:18:56.941 { 00:18:56.941 "method": "bdev_wait_for_examine" 00:18:56.941 } 00:18:56.941 ] 00:18:56.941 }, 00:18:56.941 { 00:18:56.941 "subsystem": "nbd", 00:18:56.941 "config": [] 00:18:56.941 } 00:18:56.941 ] 00:18:56.941 }' 00:18:56.941 [2024-07-15 21:40:47.638611] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:18:56.941 [2024-07-15 21:40:47.638694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375190 ] 00:18:56.941 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.941 [2024-07-15 21:40:47.691292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.198 [2024-07-15 21:40:47.823838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.198 [2024-07-15 21:40:47.975920] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.198 [2024-07-15 21:40:47.976049] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:58.126 21:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.126 21:40:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:58.126 21:40:48 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:58.126 Running I/O for 10 seconds... 
00:19:08.087 00:19:08.087 Latency(us) 00:19:08.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.087 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:08.087 Verification LBA range: start 0x0 length 0x2000 00:19:08.087 TLSTESTn1 : 10.02 3597.99 14.05 0.00 0.00 35513.70 8398.32 51263.72 00:19:08.087 =================================================================================================================== 00:19:08.087 Total : 3597.99 14.05 0.00 0.00 35513.70 8398.32 51263.72 00:19:08.087 0 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 375190 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 375190 ']' 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 375190 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 375190 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 375190' 00:19:08.088 killing process with pid 375190 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 375190 00:19:08.088 Received shutdown signal, test time was about 10.000000 seconds 00:19:08.088 00:19:08.088 Latency(us) 00:19:08.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.088 
=================================================================================================================== 00:19:08.088 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.088 [2024-07-15 21:40:58.827342] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:08.088 21:40:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 375190 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 375112 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 375112 ']' 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 375112 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 375112 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 375112' 00:19:08.346 killing process with pid 375112 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 375112 00:19:08.346 [2024-07-15 21:40:59.050108] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:08.346 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 375112 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 
00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=376287 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 376287 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 376287 ']' 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.603 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.603 [2024-07-15 21:40:59.308307] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:19:08.603 [2024-07-15 21:40:59.308407] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.603 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.603 [2024-07-15 21:40:59.370834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.861 [2024-07-15 21:40:59.476281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:08.861 [2024-07-15 21:40:59.476330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.861 [2024-07-15 21:40:59.476343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.861 [2024-07-15 21:40:59.476355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.861 [2024-07-15 21:40:59.476365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.861 [2024-07-15 21:40:59.476397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.nyo2u5M258 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.nyo2u5M258 00:19:08.861 21:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:09.118 [2024-07-15 21:40:59.870897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.118 21:40:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:09.681 21:41:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:09.681 [2024-07-15 21:41:00.460417] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.681 [2024-07-15 21:41:00.460599] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.938 21:41:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:09.938 malloc0 00:19:09.938 21:41:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:10.196 21:41:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nyo2u5M258 00:19:10.453 [2024-07-15 21:41:01.178830] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=376507 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 376507 /var/tmp/bdevperf.sock 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 376507 ']' 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.453 21:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.453 [2024-07-15 21:41:01.233784] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:19:10.453 [2024-07-15 21:41:01.233867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376507 ] 00:19:10.712 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.712 [2024-07-15 21:41:01.282468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.712 [2024-07-15 21:41:01.376613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.712 21:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.712 21:41:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:10.712 21:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nyo2u5M258 00:19:10.970 21:41:01 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:11.228 [2024-07-15 21:41:01.931286] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.228 
nvme0n1 00:19:11.486 21:41:02 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.486 Running I/O for 1 seconds... 00:19:12.419 00:19:12.419 Latency(us) 00:19:12.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.419 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:12.419 Verification LBA range: start 0x0 length 0x2000 00:19:12.419 nvme0n1 : 1.03 3582.27 13.99 0.00 0.00 35329.43 8009.96 55924.05 00:19:12.419 =================================================================================================================== 00:19:12.419 Total : 3582.27 13.99 0.00 0.00 35329.43 8009.96 55924.05 00:19:12.419 0 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 376507 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 376507 ']' 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 376507 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 376507 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 376507' 00:19:12.419 killing process with pid 376507 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 376507 00:19:12.419 Received shutdown signal, test time was about 1.000000 seconds 00:19:12.419 00:19:12.419 Latency(us) 00:19:12.419 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:19:12.419 =================================================================================================================== 00:19:12.419 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.419 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 376507 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 376287 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 376287 ']' 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 376287 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 376287 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 376287' 00:19:12.690 killing process with pid 376287 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 376287 00:19:12.690 [2024-07-15 21:41:03.431964] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:12.690 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 376287 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=376723 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 376723 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 376723 ']' 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.947 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.947 [2024-07-15 21:41:03.687278] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:19:12.947 [2024-07-15 21:41:03.687381] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.947 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.205 [2024-07-15 21:41:03.750944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.205 [2024-07-15 21:41:03.845695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.205 [2024-07-15 21:41:03.845741] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:13.205 [2024-07-15 21:41:03.845765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.205 [2024-07-15 21:41:03.845776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.205 [2024-07-15 21:41:03.845786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.205 [2024-07-15 21:41:03.845815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.205 21:41:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.205 [2024-07-15 21:41:03.966817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.205 malloc0 00:19:13.205 [2024-07-15 21:41:03.996463] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.205 [2024-07-15 21:41:03.996672] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=376743 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 
376743 /var/tmp/bdevperf.sock 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 376743 ']' 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.463 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.463 [2024-07-15 21:41:04.072787] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:19:13.463 [2024-07-15 21:41:04.072882] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376743 ] 00:19:13.463 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.463 [2024-07-15 21:41:04.127117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.463 [2024-07-15 21:41:04.224093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.721 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.721 21:41:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:13.721 21:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.nyo2u5M258 00:19:13.979 21:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:14.237 [2024-07-15 21:41:04.899359] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.237 nvme0n1 00:19:14.237 21:41:04 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.494 Running I/O for 1 seconds... 
00:19:15.427 00:19:15.427 Latency(us) 00:19:15.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.427 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:15.427 Verification LBA range: start 0x0 length 0x2000 00:19:15.427 nvme0n1 : 1.02 3579.68 13.98 0.00 0.00 35482.78 6359.42 37671.06 00:19:15.427 =================================================================================================================== 00:19:15.427 Total : 3579.68 13.98 0.00 0.00 35482.78 6359.42 37671.06 00:19:15.427 0 00:19:15.427 21:41:06 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:15.427 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.427 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.685 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.685 21:41:06 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:15.685 "subsystems": [ 00:19:15.685 { 00:19:15.685 "subsystem": "keyring", 00:19:15.685 "config": [ 00:19:15.685 { 00:19:15.685 "method": "keyring_file_add_key", 00:19:15.685 "params": { 00:19:15.685 "name": "key0", 00:19:15.685 "path": "/tmp/tmp.nyo2u5M258" 00:19:15.685 } 00:19:15.685 } 00:19:15.685 ] 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "subsystem": "iobuf", 00:19:15.685 "config": [ 00:19:15.685 { 00:19:15.685 "method": "iobuf_set_options", 00:19:15.685 "params": { 00:19:15.685 "small_pool_count": 8192, 00:19:15.685 "large_pool_count": 1024, 00:19:15.685 "small_bufsize": 8192, 00:19:15.685 "large_bufsize": 135168 00:19:15.685 } 00:19:15.685 } 00:19:15.685 ] 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "subsystem": "sock", 00:19:15.685 "config": [ 00:19:15.685 { 00:19:15.685 "method": "sock_set_default_impl", 00:19:15.685 "params": { 00:19:15.685 "impl_name": "posix" 00:19:15.685 } 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "method": "sock_impl_set_options", 00:19:15.685 
"params": { 00:19:15.685 "impl_name": "ssl", 00:19:15.685 "recv_buf_size": 4096, 00:19:15.685 "send_buf_size": 4096, 00:19:15.685 "enable_recv_pipe": true, 00:19:15.685 "enable_quickack": false, 00:19:15.685 "enable_placement_id": 0, 00:19:15.685 "enable_zerocopy_send_server": true, 00:19:15.685 "enable_zerocopy_send_client": false, 00:19:15.685 "zerocopy_threshold": 0, 00:19:15.685 "tls_version": 0, 00:19:15.685 "enable_ktls": false 00:19:15.685 } 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "method": "sock_impl_set_options", 00:19:15.685 "params": { 00:19:15.685 "impl_name": "posix", 00:19:15.685 "recv_buf_size": 2097152, 00:19:15.685 "send_buf_size": 2097152, 00:19:15.685 "enable_recv_pipe": true, 00:19:15.685 "enable_quickack": false, 00:19:15.685 "enable_placement_id": 0, 00:19:15.685 "enable_zerocopy_send_server": true, 00:19:15.685 "enable_zerocopy_send_client": false, 00:19:15.685 "zerocopy_threshold": 0, 00:19:15.685 "tls_version": 0, 00:19:15.685 "enable_ktls": false 00:19:15.685 } 00:19:15.685 } 00:19:15.685 ] 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "subsystem": "vmd", 00:19:15.685 "config": [] 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "subsystem": "accel", 00:19:15.685 "config": [ 00:19:15.685 { 00:19:15.685 "method": "accel_set_options", 00:19:15.685 "params": { 00:19:15.685 "small_cache_size": 128, 00:19:15.685 "large_cache_size": 16, 00:19:15.685 "task_count": 2048, 00:19:15.685 "sequence_count": 2048, 00:19:15.685 "buf_count": 2048 00:19:15.685 } 00:19:15.685 } 00:19:15.685 ] 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "subsystem": "bdev", 00:19:15.685 "config": [ 00:19:15.685 { 00:19:15.685 "method": "bdev_set_options", 00:19:15.685 "params": { 00:19:15.685 "bdev_io_pool_size": 65535, 00:19:15.685 "bdev_io_cache_size": 256, 00:19:15.685 "bdev_auto_examine": true, 00:19:15.685 "iobuf_small_cache_size": 128, 00:19:15.685 "iobuf_large_cache_size": 16 00:19:15.685 } 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "method": "bdev_raid_set_options", 
00:19:15.685 "params": { 00:19:15.685 "process_window_size_kb": 1024 00:19:15.685 } 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "method": "bdev_iscsi_set_options", 00:19:15.685 "params": { 00:19:15.685 "timeout_sec": 30 00:19:15.685 } 00:19:15.685 }, 00:19:15.685 { 00:19:15.685 "method": "bdev_nvme_set_options", 00:19:15.685 "params": { 00:19:15.685 "action_on_timeout": "none", 00:19:15.685 "timeout_us": 0, 00:19:15.685 "timeout_admin_us": 0, 00:19:15.685 "keep_alive_timeout_ms": 10000, 00:19:15.685 "arbitration_burst": 0, 00:19:15.685 "low_priority_weight": 0, 00:19:15.685 "medium_priority_weight": 0, 00:19:15.685 "high_priority_weight": 0, 00:19:15.685 "nvme_adminq_poll_period_us": 10000, 00:19:15.685 "nvme_ioq_poll_period_us": 0, 00:19:15.685 "io_queue_requests": 0, 00:19:15.685 "delay_cmd_submit": true, 00:19:15.685 "transport_retry_count": 4, 00:19:15.685 "bdev_retry_count": 3, 00:19:15.685 "transport_ack_timeout": 0, 00:19:15.685 "ctrlr_loss_timeout_sec": 0, 00:19:15.685 "reconnect_delay_sec": 0, 00:19:15.685 "fast_io_fail_timeout_sec": 0, 00:19:15.685 "disable_auto_failback": false, 00:19:15.685 "generate_uuids": false, 00:19:15.685 "transport_tos": 0, 00:19:15.685 "nvme_error_stat": false, 00:19:15.685 "rdma_srq_size": 0, 00:19:15.685 "io_path_stat": false, 00:19:15.685 "allow_accel_sequence": false, 00:19:15.685 "rdma_max_cq_size": 0, 00:19:15.685 "rdma_cm_event_timeout_ms": 0, 00:19:15.685 "dhchap_digests": [ 00:19:15.685 "sha256", 00:19:15.685 "sha384", 00:19:15.685 "sha512" 00:19:15.685 ], 00:19:15.685 "dhchap_dhgroups": [ 00:19:15.685 "null", 00:19:15.685 "ffdhe2048", 00:19:15.685 "ffdhe3072", 00:19:15.685 "ffdhe4096", 00:19:15.686 "ffdhe6144", 00:19:15.686 "ffdhe8192" 00:19:15.686 ] 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "bdev_nvme_set_hotplug", 00:19:15.686 "params": { 00:19:15.686 "period_us": 100000, 00:19:15.686 "enable": false 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "bdev_malloc_create", 
00:19:15.686 "params": { 00:19:15.686 "name": "malloc0", 00:19:15.686 "num_blocks": 8192, 00:19:15.686 "block_size": 4096, 00:19:15.686 "physical_block_size": 4096, 00:19:15.686 "uuid": "fa26941e-2cc9-4162-8262-910a33f0731f", 00:19:15.686 "optimal_io_boundary": 0 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "bdev_wait_for_examine" 00:19:15.686 } 00:19:15.686 ] 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "subsystem": "nbd", 00:19:15.686 "config": [] 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "subsystem": "scheduler", 00:19:15.686 "config": [ 00:19:15.686 { 00:19:15.686 "method": "framework_set_scheduler", 00:19:15.686 "params": { 00:19:15.686 "name": "static" 00:19:15.686 } 00:19:15.686 } 00:19:15.686 ] 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "subsystem": "nvmf", 00:19:15.686 "config": [ 00:19:15.686 { 00:19:15.686 "method": "nvmf_set_config", 00:19:15.686 "params": { 00:19:15.686 "discovery_filter": "match_any", 00:19:15.686 "admin_cmd_passthru": { 00:19:15.686 "identify_ctrlr": false 00:19:15.686 } 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "nvmf_set_max_subsystems", 00:19:15.686 "params": { 00:19:15.686 "max_subsystems": 1024 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "nvmf_set_crdt", 00:19:15.686 "params": { 00:19:15.686 "crdt1": 0, 00:19:15.686 "crdt2": 0, 00:19:15.686 "crdt3": 0 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "nvmf_create_transport", 00:19:15.686 "params": { 00:19:15.686 "trtype": "TCP", 00:19:15.686 "max_queue_depth": 128, 00:19:15.686 "max_io_qpairs_per_ctrlr": 127, 00:19:15.686 "in_capsule_data_size": 4096, 00:19:15.686 "max_io_size": 131072, 00:19:15.686 "io_unit_size": 131072, 00:19:15.686 "max_aq_depth": 128, 00:19:15.686 "num_shared_buffers": 511, 00:19:15.686 "buf_cache_size": 4294967295, 00:19:15.686 "dif_insert_or_strip": false, 00:19:15.686 "zcopy": false, 00:19:15.686 "c2h_success": false, 00:19:15.686 "sock_priority": 0, 
00:19:15.686 "abort_timeout_sec": 1, 00:19:15.686 "ack_timeout": 0, 00:19:15.686 "data_wr_pool_size": 0 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "nvmf_create_subsystem", 00:19:15.686 "params": { 00:19:15.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.686 "allow_any_host": false, 00:19:15.686 "serial_number": "00000000000000000000", 00:19:15.686 "model_number": "SPDK bdev Controller", 00:19:15.686 "max_namespaces": 32, 00:19:15.686 "min_cntlid": 1, 00:19:15.686 "max_cntlid": 65519, 00:19:15.686 "ana_reporting": false 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "nvmf_subsystem_add_host", 00:19:15.686 "params": { 00:19:15.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.686 "host": "nqn.2016-06.io.spdk:host1", 00:19:15.686 "psk": "key0" 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "nvmf_subsystem_add_ns", 00:19:15.686 "params": { 00:19:15.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.686 "namespace": { 00:19:15.686 "nsid": 1, 00:19:15.686 "bdev_name": "malloc0", 00:19:15.686 "nguid": "FA26941E2CC941628262910A33F0731F", 00:19:15.686 "uuid": "fa26941e-2cc9-4162-8262-910a33f0731f", 00:19:15.686 "no_auto_visible": false 00:19:15.686 } 00:19:15.686 } 00:19:15.686 }, 00:19:15.686 { 00:19:15.686 "method": "nvmf_subsystem_add_listener", 00:19:15.686 "params": { 00:19:15.686 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.686 "listen_address": { 00:19:15.686 "trtype": "TCP", 00:19:15.686 "adrfam": "IPv4", 00:19:15.686 "traddr": "10.0.0.2", 00:19:15.686 "trsvcid": "4420" 00:19:15.686 }, 00:19:15.686 "secure_channel": false, 00:19:15.686 "sock_impl": "ssl" 00:19:15.686 } 00:19:15.686 } 00:19:15.686 ] 00:19:15.686 } 00:19:15.686 ] 00:19:15.686 }' 00:19:15.686 21:41:06 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 
00:19:15.945 "subsystems": [ 00:19:15.945 { 00:19:15.945 "subsystem": "keyring", 00:19:15.945 "config": [ 00:19:15.945 { 00:19:15.945 "method": "keyring_file_add_key", 00:19:15.945 "params": { 00:19:15.945 "name": "key0", 00:19:15.945 "path": "/tmp/tmp.nyo2u5M258" 00:19:15.945 } 00:19:15.945 } 00:19:15.945 ] 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "subsystem": "iobuf", 00:19:15.945 "config": [ 00:19:15.945 { 00:19:15.945 "method": "iobuf_set_options", 00:19:15.945 "params": { 00:19:15.945 "small_pool_count": 8192, 00:19:15.945 "large_pool_count": 1024, 00:19:15.945 "small_bufsize": 8192, 00:19:15.945 "large_bufsize": 135168 00:19:15.945 } 00:19:15.945 } 00:19:15.945 ] 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "subsystem": "sock", 00:19:15.945 "config": [ 00:19:15.945 { 00:19:15.945 "method": "sock_set_default_impl", 00:19:15.945 "params": { 00:19:15.945 "impl_name": "posix" 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "sock_impl_set_options", 00:19:15.945 "params": { 00:19:15.945 "impl_name": "ssl", 00:19:15.945 "recv_buf_size": 4096, 00:19:15.945 "send_buf_size": 4096, 00:19:15.945 "enable_recv_pipe": true, 00:19:15.945 "enable_quickack": false, 00:19:15.945 "enable_placement_id": 0, 00:19:15.945 "enable_zerocopy_send_server": true, 00:19:15.945 "enable_zerocopy_send_client": false, 00:19:15.945 "zerocopy_threshold": 0, 00:19:15.945 "tls_version": 0, 00:19:15.945 "enable_ktls": false 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "sock_impl_set_options", 00:19:15.945 "params": { 00:19:15.945 "impl_name": "posix", 00:19:15.945 "recv_buf_size": 2097152, 00:19:15.945 "send_buf_size": 2097152, 00:19:15.945 "enable_recv_pipe": true, 00:19:15.945 "enable_quickack": false, 00:19:15.945 "enable_placement_id": 0, 00:19:15.945 "enable_zerocopy_send_server": true, 00:19:15.945 "enable_zerocopy_send_client": false, 00:19:15.945 "zerocopy_threshold": 0, 00:19:15.945 "tls_version": 0, 00:19:15.945 "enable_ktls": false 
00:19:15.945 } 00:19:15.945 } 00:19:15.945 ] 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "subsystem": "vmd", 00:19:15.945 "config": [] 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "subsystem": "accel", 00:19:15.945 "config": [ 00:19:15.945 { 00:19:15.945 "method": "accel_set_options", 00:19:15.945 "params": { 00:19:15.945 "small_cache_size": 128, 00:19:15.945 "large_cache_size": 16, 00:19:15.945 "task_count": 2048, 00:19:15.945 "sequence_count": 2048, 00:19:15.945 "buf_count": 2048 00:19:15.945 } 00:19:15.945 } 00:19:15.945 ] 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "subsystem": "bdev", 00:19:15.945 "config": [ 00:19:15.945 { 00:19:15.945 "method": "bdev_set_options", 00:19:15.945 "params": { 00:19:15.945 "bdev_io_pool_size": 65535, 00:19:15.945 "bdev_io_cache_size": 256, 00:19:15.945 "bdev_auto_examine": true, 00:19:15.945 "iobuf_small_cache_size": 128, 00:19:15.945 "iobuf_large_cache_size": 16 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "bdev_raid_set_options", 00:19:15.945 "params": { 00:19:15.945 "process_window_size_kb": 1024 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "bdev_iscsi_set_options", 00:19:15.945 "params": { 00:19:15.945 "timeout_sec": 30 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "bdev_nvme_set_options", 00:19:15.945 "params": { 00:19:15.945 "action_on_timeout": "none", 00:19:15.945 "timeout_us": 0, 00:19:15.945 "timeout_admin_us": 0, 00:19:15.945 "keep_alive_timeout_ms": 10000, 00:19:15.945 "arbitration_burst": 0, 00:19:15.945 "low_priority_weight": 0, 00:19:15.945 "medium_priority_weight": 0, 00:19:15.945 "high_priority_weight": 0, 00:19:15.945 "nvme_adminq_poll_period_us": 10000, 00:19:15.945 "nvme_ioq_poll_period_us": 0, 00:19:15.945 "io_queue_requests": 512, 00:19:15.945 "delay_cmd_submit": true, 00:19:15.945 "transport_retry_count": 4, 00:19:15.945 "bdev_retry_count": 3, 00:19:15.945 "transport_ack_timeout": 0, 00:19:15.945 "ctrlr_loss_timeout_sec": 0, 00:19:15.945 
"reconnect_delay_sec": 0, 00:19:15.945 "fast_io_fail_timeout_sec": 0, 00:19:15.945 "disable_auto_failback": false, 00:19:15.945 "generate_uuids": false, 00:19:15.945 "transport_tos": 0, 00:19:15.945 "nvme_error_stat": false, 00:19:15.945 "rdma_srq_size": 0, 00:19:15.945 "io_path_stat": false, 00:19:15.945 "allow_accel_sequence": false, 00:19:15.945 "rdma_max_cq_size": 0, 00:19:15.945 "rdma_cm_event_timeout_ms": 0, 00:19:15.945 "dhchap_digests": [ 00:19:15.945 "sha256", 00:19:15.945 "sha384", 00:19:15.945 "sha512" 00:19:15.945 ], 00:19:15.945 "dhchap_dhgroups": [ 00:19:15.945 "null", 00:19:15.945 "ffdhe2048", 00:19:15.945 "ffdhe3072", 00:19:15.945 "ffdhe4096", 00:19:15.945 "ffdhe6144", 00:19:15.945 "ffdhe8192" 00:19:15.945 ] 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "bdev_nvme_attach_controller", 00:19:15.945 "params": { 00:19:15.945 "name": "nvme0", 00:19:15.945 "trtype": "TCP", 00:19:15.945 "adrfam": "IPv4", 00:19:15.945 "traddr": "10.0.0.2", 00:19:15.945 "trsvcid": "4420", 00:19:15.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.945 "prchk_reftag": false, 00:19:15.945 "prchk_guard": false, 00:19:15.945 "ctrlr_loss_timeout_sec": 0, 00:19:15.945 "reconnect_delay_sec": 0, 00:19:15.945 "fast_io_fail_timeout_sec": 0, 00:19:15.945 "psk": "key0", 00:19:15.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.945 "hdgst": false, 00:19:15.945 "ddgst": false 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "bdev_nvme_set_hotplug", 00:19:15.945 "params": { 00:19:15.945 "period_us": 100000, 00:19:15.945 "enable": false 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "bdev_enable_histogram", 00:19:15.945 "params": { 00:19:15.945 "name": "nvme0n1", 00:19:15.945 "enable": true 00:19:15.945 } 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "method": "bdev_wait_for_examine" 00:19:15.945 } 00:19:15.945 ] 00:19:15.945 }, 00:19:15.945 { 00:19:15.945 "subsystem": "nbd", 00:19:15.945 "config": [] 00:19:15.945 } 
00:19:15.945 ] 00:19:15.945 }' 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 376743 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 376743 ']' 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 376743 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 376743 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 376743' 00:19:15.945 killing process with pid 376743 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 376743 00:19:15.945 Received shutdown signal, test time was about 1.000000 seconds 00:19:15.945 00:19:15.945 Latency(us) 00:19:15.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.945 =================================================================================================================== 00:19:15.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:15.945 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 376743 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 376723 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 376723 ']' 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 376723 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.204 
21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 376723 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 376723' 00:19:16.204 killing process with pid 376723 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 376723 00:19:16.204 21:41:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 376723 00:19:16.463 21:41:07 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:16.463 21:41:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.463 21:41:07 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:16.463 "subsystems": [ 00:19:16.463 { 00:19:16.463 "subsystem": "keyring", 00:19:16.463 "config": [ 00:19:16.463 { 00:19:16.463 "method": "keyring_file_add_key", 00:19:16.463 "params": { 00:19:16.463 "name": "key0", 00:19:16.463 "path": "/tmp/tmp.nyo2u5M258" 00:19:16.463 } 00:19:16.463 } 00:19:16.463 ] 00:19:16.463 }, 00:19:16.463 { 00:19:16.463 "subsystem": "iobuf", 00:19:16.463 "config": [ 00:19:16.463 { 00:19:16.463 "method": "iobuf_set_options", 00:19:16.463 "params": { 00:19:16.463 "small_pool_count": 8192, 00:19:16.463 "large_pool_count": 1024, 00:19:16.463 "small_bufsize": 8192, 00:19:16.463 "large_bufsize": 135168 00:19:16.463 } 00:19:16.463 } 00:19:16.463 ] 00:19:16.463 }, 00:19:16.463 { 00:19:16.463 "subsystem": "sock", 00:19:16.463 "config": [ 00:19:16.463 { 00:19:16.463 "method": "sock_set_default_impl", 00:19:16.463 "params": { 00:19:16.463 "impl_name": "posix" 00:19:16.463 } 00:19:16.463 }, 00:19:16.463 { 00:19:16.463 "method": "sock_impl_set_options", 00:19:16.463 "params": { 00:19:16.464 "impl_name": "ssl", 00:19:16.464 "recv_buf_size": 4096, 00:19:16.464 
"send_buf_size": 4096, 00:19:16.464 "enable_recv_pipe": true, 00:19:16.464 "enable_quickack": false, 00:19:16.464 "enable_placement_id": 0, 00:19:16.464 "enable_zerocopy_send_server": true, 00:19:16.464 "enable_zerocopy_send_client": false, 00:19:16.464 "zerocopy_threshold": 0, 00:19:16.464 "tls_version": 0, 00:19:16.464 "enable_ktls": false 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "sock_impl_set_options", 00:19:16.464 "params": { 00:19:16.464 "impl_name": "posix", 00:19:16.464 "recv_buf_size": 2097152, 00:19:16.464 "send_buf_size": 2097152, 00:19:16.464 "enable_recv_pipe": true, 00:19:16.464 "enable_quickack": false, 00:19:16.464 "enable_placement_id": 0, 00:19:16.464 "enable_zerocopy_send_server": true, 00:19:16.464 "enable_zerocopy_send_client": false, 00:19:16.464 "zerocopy_threshold": 0, 00:19:16.464 "tls_version": 0, 00:19:16.464 "enable_ktls": false 00:19:16.464 } 00:19:16.464 } 00:19:16.464 ] 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "subsystem": "vmd", 00:19:16.464 "config": [] 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "subsystem": "accel", 00:19:16.464 "config": [ 00:19:16.464 { 00:19:16.464 "method": "accel_set_options", 00:19:16.464 "params": { 00:19:16.464 "small_cache_size": 128, 00:19:16.464 "large_cache_size": 16, 00:19:16.464 "task_count": 2048, 00:19:16.464 "sequence_count": 2048, 00:19:16.464 "buf_count": 2048 00:19:16.464 } 00:19:16.464 } 00:19:16.464 ] 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "subsystem": "bdev", 00:19:16.464 "config": [ 00:19:16.464 { 00:19:16.464 "method": "bdev_set_options", 00:19:16.464 "params": { 00:19:16.464 "bdev_io_pool_size": 65535, 00:19:16.464 "bdev_io_cache_size": 256, 00:19:16.464 "bdev_auto_examine": true, 00:19:16.464 "iobuf_small_cache_size": 128, 00:19:16.464 "iobuf_large_cache_size": 16 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "bdev_raid_set_options", 00:19:16.464 "params": { 00:19:16.464 "process_window_size_kb": 1024 00:19:16.464 } 00:19:16.464 
}, 00:19:16.464 { 00:19:16.464 "method": "bdev_iscsi_set_options", 00:19:16.464 "params": { 00:19:16.464 "timeout_sec": 30 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "bdev_nvme_set_options", 00:19:16.464 "params": { 00:19:16.464 "action_on_timeout": "none", 00:19:16.464 "timeout_us": 0, 00:19:16.464 "timeout_admin_us": 0, 00:19:16.464 "keep_alive_timeout_ms": 10000, 00:19:16.464 "arbitration_burst": 0, 00:19:16.464 "low_priority_weight": 0, 00:19:16.464 "medium_priority_weight": 0, 00:19:16.464 "high_priority_weight": 0, 00:19:16.464 "nvme_adminq_poll_period_us": 10000, 00:19:16.464 "nvme_ioq_poll_period_us": 0, 00:19:16.464 "io_queue_requests": 0, 00:19:16.464 "delay_cmd_submit": true, 00:19:16.464 "transport_retry_count": 4, 00:19:16.464 "bdev_retry_count": 3, 00:19:16.464 "transport_ack_timeout": 0, 00:19:16.464 "ctrlr_loss_timeout_sec": 0, 00:19:16.464 "reconnect_delay_sec": 0, 00:19:16.464 "fast_io_fail_timeout_sec": 0, 00:19:16.464 "disable_auto_failback": false, 00:19:16.464 "generate_uuids": false, 00:19:16.464 "transport_tos": 0, 00:19:16.464 "nvme_error_stat": false, 00:19:16.464 "rdma_srq_size": 0, 00:19:16.464 "io_path_stat": false, 00:19:16.464 "allow_accel_sequence": false, 00:19:16.464 "rdma_max_cq_size": 0, 00:19:16.464 "rdma_cm_event_timeout_ms": 0, 00:19:16.464 "dhchap_digests": [ 00:19:16.464 "sha256", 00:19:16.464 "sha384", 00:19:16.464 "sha512" 00:19:16.464 ], 00:19:16.464 "dhchap_dhgroups": [ 00:19:16.464 "null", 00:19:16.464 "ffdhe2048", 00:19:16.464 "ffdhe3072", 00:19:16.464 "ffdhe4096", 00:19:16.464 "ffdhe6144", 00:19:16.464 "ffdhe8192" 00:19:16.464 ] 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "bdev_nvme_set_hotplug", 00:19:16.464 "params": { 00:19:16.464 "period_us": 100000, 00:19:16.464 "enable": false 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "bdev_malloc_create", 00:19:16.464 "params": { 00:19:16.464 "name": "malloc0", 00:19:16.464 "num_blocks": 8192, 
00:19:16.464 "block_size": 4096, 00:19:16.464 "physical_block_size": 4096, 00:19:16.464 "uuid": "fa26941e-2cc9-4162-8262-910a33f0731f", 00:19:16.464 "optimal_io_boundary": 0 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "bdev_wait_for_examine" 00:19:16.464 } 00:19:16.464 ] 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "subsystem": "nbd", 00:19:16.464 "config": [] 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "subsystem": "scheduler", 00:19:16.464 "config": [ 00:19:16.464 { 00:19:16.464 "method": "framework_set_scheduler", 00:19:16.464 "params": { 00:19:16.464 "name": "static" 00:19:16.464 } 00:19:16.464 } 00:19:16.464 ] 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "subsystem": "nvmf", 00:19:16.464 "config": [ 00:19:16.464 { 00:19:16.464 "method": "nvmf_set_config", 00:19:16.464 "params": { 00:19:16.464 "discovery_filter": "match_any", 00:19:16.464 "admin_cmd_passthru": { 00:19:16.464 "identify_ctrlr": false 00:19:16.464 } 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "nvmf_set_max_subsystems", 00:19:16.464 "params": { 00:19:16.464 "max_subsystems": 1024 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "nvmf_set_crdt", 00:19:16.464 "params": { 00:19:16.464 "crdt1": 0, 00:19:16.464 "crdt2": 0, 00:19:16.464 "crdt3": 0 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "nvmf_create_transport", 00:19:16.464 "params": { 00:19:16.464 "trtype": "TCP", 00:19:16.464 "max_queue_depth": 128, 00:19:16.464 "max_io_qpairs_per_ctrlr": 127, 00:19:16.464 "in_capsule_data_size": 4096, 00:19:16.464 "max_io_size": 131072, 00:19:16.464 "io_unit_size": 131072, 00:19:16.464 "max_aq_depth": 128, 00:19:16.464 "num_shared_buffers": 511, 00:19:16.464 "buf_cache_size": 4294967295, 00:19:16.464 "dif_insert_or_strip": false, 00:19:16.464 "zcopy": false, 00:19:16.464 "c2h_success": false, 00:19:16.464 "sock_priority": 0, 00:19:16.464 "abort_timeout_sec": 1, 00:19:16.464 "ack_timeout": 0, 00:19:16.464 
"data_wr_pool_size": 0 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "nvmf_create_subsystem", 00:19:16.464 "params": { 00:19:16.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.464 "allow_any_host": false, 00:19:16.464 "serial_number": "00000000000000000000", 00:19:16.464 "model_number": "SPDK bdev Controller", 00:19:16.464 "max_namespaces": 32, 00:19:16.464 "min_cntlid": 1, 00:19:16.464 "max_cntlid": 65519, 00:19:16.464 "ana_reporting": false 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "nvmf_subsystem_add_host", 00:19:16.464 "params": { 00:19:16.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.464 "host": "nqn.2016-06.io.spdk:host1", 00:19:16.464 "psk": "key0" 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "nvmf_subsystem_add_ns", 00:19:16.464 "params": { 00:19:16.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.464 "namespace": { 00:19:16.464 "nsid": 1, 00:19:16.464 "bdev_name": "malloc0", 00:19:16.464 "nguid": "FA26941E2CC941628262910A33F0731F", 00:19:16.464 "uuid": "fa26941e-2cc9-4162-8262-910a33f0731f", 00:19:16.464 "no_auto_visible": false 00:19:16.464 } 00:19:16.464 } 00:19:16.464 }, 00:19:16.464 { 00:19:16.464 "method": "nvmf_subsystem_add_listener", 00:19:16.464 "params": { 00:19:16.464 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.464 "listen_address": { 00:19:16.464 "trtype": "TCP", 00:19:16.464 "adrfam": "IPv4", 00:19:16.464 "traddr": "10.0.0.2", 00:19:16.464 "trsvcid": "4420" 00:19:16.464 }, 00:19:16.464 "secure_channel": false, 00:19:16.464 "sock_impl": "ssl" 00:19:16.464 } 00:19:16.464 } 00:19:16.464 ] 00:19:16.464 } 00:19:16.464 ] 00:19:16.464 }' 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=377054 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 377054 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 377054 ']' 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.464 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.464 [2024-07-15 21:41:07.066561] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:19:16.464 [2024-07-15 21:41:07.066662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.464 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.465 [2024-07-15 21:41:07.129093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.465 [2024-07-15 21:41:07.233595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.465 [2024-07-15 21:41:07.233643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:16.465 [2024-07-15 21:41:07.233657] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.465 [2024-07-15 21:41:07.233668] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.465 [2024-07-15 21:41:07.233679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.465 [2024-07-15 21:41:07.233762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.723 [2024-07-15 21:41:07.455117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.723 [2024-07-15 21:41:07.487113] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:16.723 [2024-07-15 21:41:07.505289] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=377178 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 377178 /var/tmp/bdevperf.sock 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 377178 ']' 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.981 21:41:07 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:16.981 21:41:07 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:16.981 "subsystems": [ 00:19:16.981 { 00:19:16.981 "subsystem": "keyring", 00:19:16.981 "config": [ 00:19:16.981 { 00:19:16.981 "method": "keyring_file_add_key", 00:19:16.981 "params": { 00:19:16.981 "name": "key0", 00:19:16.981 "path": "/tmp/tmp.nyo2u5M258" 00:19:16.981 } 00:19:16.981 } 00:19:16.981 ] 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "subsystem": "iobuf", 00:19:16.981 "config": [ 00:19:16.981 { 00:19:16.981 "method": "iobuf_set_options", 00:19:16.981 "params": { 00:19:16.981 "small_pool_count": 8192, 00:19:16.981 "large_pool_count": 1024, 00:19:16.981 "small_bufsize": 8192, 00:19:16.981 "large_bufsize": 135168 00:19:16.981 } 00:19:16.981 } 00:19:16.981 ] 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "subsystem": "sock", 00:19:16.981 "config": [ 00:19:16.981 { 00:19:16.981 "method": "sock_set_default_impl", 00:19:16.981 "params": { 00:19:16.981 "impl_name": "posix" 00:19:16.981 } 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "method": "sock_impl_set_options", 00:19:16.981 "params": { 00:19:16.981 "impl_name": "ssl", 00:19:16.981 "recv_buf_size": 4096, 00:19:16.981 "send_buf_size": 4096, 00:19:16.981 "enable_recv_pipe": true, 00:19:16.981 "enable_quickack": false, 00:19:16.981 "enable_placement_id": 0, 00:19:16.981 
"enable_zerocopy_send_server": true, 00:19:16.981 "enable_zerocopy_send_client": false, 00:19:16.981 "zerocopy_threshold": 0, 00:19:16.981 "tls_version": 0, 00:19:16.981 "enable_ktls": false 00:19:16.981 } 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "method": "sock_impl_set_options", 00:19:16.981 "params": { 00:19:16.981 "impl_name": "posix", 00:19:16.981 "recv_buf_size": 2097152, 00:19:16.981 "send_buf_size": 2097152, 00:19:16.981 "enable_recv_pipe": true, 00:19:16.981 "enable_quickack": false, 00:19:16.981 "enable_placement_id": 0, 00:19:16.981 "enable_zerocopy_send_server": true, 00:19:16.981 "enable_zerocopy_send_client": false, 00:19:16.981 "zerocopy_threshold": 0, 00:19:16.981 "tls_version": 0, 00:19:16.981 "enable_ktls": false 00:19:16.981 } 00:19:16.981 } 00:19:16.981 ] 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "subsystem": "vmd", 00:19:16.981 "config": [] 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "subsystem": "accel", 00:19:16.981 "config": [ 00:19:16.981 { 00:19:16.981 "method": "accel_set_options", 00:19:16.981 "params": { 00:19:16.981 "small_cache_size": 128, 00:19:16.981 "large_cache_size": 16, 00:19:16.981 "task_count": 2048, 00:19:16.981 "sequence_count": 2048, 00:19:16.981 "buf_count": 2048 00:19:16.981 } 00:19:16.981 } 00:19:16.981 ] 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "subsystem": "bdev", 00:19:16.981 "config": [ 00:19:16.981 { 00:19:16.981 "method": "bdev_set_options", 00:19:16.981 "params": { 00:19:16.981 "bdev_io_pool_size": 65535, 00:19:16.981 "bdev_io_cache_size": 256, 00:19:16.981 "bdev_auto_examine": true, 00:19:16.981 "iobuf_small_cache_size": 128, 00:19:16.981 "iobuf_large_cache_size": 16 00:19:16.981 } 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "method": "bdev_raid_set_options", 00:19:16.981 "params": { 00:19:16.981 "process_window_size_kb": 1024 00:19:16.981 } 00:19:16.981 }, 00:19:16.981 { 00:19:16.981 "method": "bdev_iscsi_set_options", 00:19:16.981 "params": { 00:19:16.981 "timeout_sec": 30 00:19:16.981 } 00:19:16.981 }, 
00:19:16.981 { 00:19:16.981 "method": "bdev_nvme_set_options", 00:19:16.981 "params": { 00:19:16.981 "action_on_timeout": "none", 00:19:16.981 "timeout_us": 0, 00:19:16.981 "timeout_admin_us": 0, 00:19:16.981 "keep_alive_timeout_ms": 10000, 00:19:16.981 "arbitration_burst": 0, 00:19:16.981 "low_priority_weight": 0, 00:19:16.981 "medium_priority_weight": 0, 00:19:16.981 "high_priority_weight": 0, 00:19:16.981 "nvme_adminq_poll_period_us": 10000, 00:19:16.981 "nvme_ioq_poll_period_us": 0, 00:19:16.981 "io_queue_requests": 512, 00:19:16.981 "delay_cmd_submit": true, 00:19:16.981 "transport_retry_count": 4, 00:19:16.981 "bdev_retry_count": 3, 00:19:16.981 "transport_ack_timeout": 0, 00:19:16.981 "ctrlr_loss_timeout_sec": 0, 00:19:16.981 "reconnect_delay_sec": 0, 00:19:16.981 "fast_io_fail_timeout_sec": 0, 00:19:16.981 "disable_auto_failback": false, 00:19:16.981 "generate_uuids": false, 00:19:16.981 "transport_tos": 0, 00:19:16.981 "nvme_error_stat": false, 00:19:16.981 "rdma_srq_size": 0, 00:19:16.981 "io_path_stat": false, 00:19:16.981 "allow_accel_sequence": false, 00:19:16.981 "rdma_max_cq_size": 0, 00:19:16.981 "rdma_cm_event_timeout_ms": 0, 00:19:16.981 "dhchap_digests": [ 00:19:16.981 "sha256", 00:19:16.981 "sha384", 00:19:16.981 "sha512" 00:19:16.981 ], 00:19:16.981 "dhchap_dhgroups": [ 00:19:16.981 "null", 00:19:16.981 "ffdhe2048", 00:19:16.981 "ffdhe3072", 00:19:16.982 "ffdhe4096", 00:19:16.982 "ffdhe6144", 00:19:16.982 "ffdhe8192" 00:19:16.982 ] 00:19:16.982 } 00:19:16.982 }, 00:19:16.982 { 00:19:16.982 "method": "bdev_nvme_attach_controller", 00:19:16.982 "params": { 00:19:16.982 "name": "nvme0", 00:19:16.982 "trtype": "TCP", 00:19:16.982 "adrfam": "IPv4", 00:19:16.982 "traddr": "10.0.0.2", 00:19:16.982 "trsvcid": "4420", 00:19:16.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.982 "prchk_reftag": false, 00:19:16.982 "prchk_guard": false, 00:19:16.982 "ctrlr_loss_timeout_sec": 0, 00:19:16.982 "reconnect_delay_sec": 0, 00:19:16.982 
"fast_io_fail_timeout_sec": 0, 00:19:16.982 "psk": "key0", 00:19:16.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.982 "hdgst": false, 00:19:16.982 "ddgst": false 00:19:16.982 } 00:19:16.982 }, 00:19:16.982 { 00:19:16.982 "method": "bdev_nvme_set_hotplug", 00:19:16.982 "params": { 00:19:16.982 "period_us": 100000, 00:19:16.982 "enable": false 00:19:16.982 } 00:19:16.982 }, 00:19:16.982 { 00:19:16.982 "method": "bdev_enable_histogram", 00:19:16.982 "params": { 00:19:16.982 "name": "nvme0n1", 00:19:16.982 "enable": true 00:19:16.982 } 00:19:16.982 }, 00:19:16.982 { 00:19:16.982 "method": "bdev_wait_for_examine" 00:19:16.982 } 00:19:16.982 ] 00:19:16.982 }, 00:19:16.982 { 00:19:16.982 "subsystem": "nbd", 00:19:16.982 "config": [] 00:19:16.982 } 00:19:16.982 ] 00:19:16.982 }' 00:19:16.982 [2024-07-15 21:41:07.602416] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:19:16.982 [2024-07-15 21:41:07.602520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid377178 ] 00:19:16.982 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.982 [2024-07-15 21:41:07.658919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.982 [2024-07-15 21:41:07.765862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.240 [2024-07-15 21:41:07.927547] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.805 21:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.805 21:41:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.805 21:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:17.805 
21:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:18.372 21:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.372 21:41:08 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:18.372 Running I/O for 1 seconds... 00:19:19.306 00:19:19.306 Latency(us) 00:19:19.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.306 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:19.306 Verification LBA range: start 0x0 length 0x2000 00:19:19.306 nvme0n1 : 1.02 3643.32 14.23 0.00 0.00 34814.13 7524.50 50098.63 00:19:19.306 =================================================================================================================== 00:19:19.306 Total : 3643.32 14.23 0.00 0.00 34814.13 7524.50 50098.63 00:19:19.306 0 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:19.306 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:19.306 nvmf_trace.0 00:19:19.564 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:19.564 21:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 377178 00:19:19.564 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 377178 ']' 00:19:19.564 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 377178 00:19:19.564 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.564 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.564 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 377178 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 377178' 00:19:19.565 killing process with pid 377178 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 377178 00:19:19.565 Received shutdown signal, test time was about 1.000000 seconds 00:19:19.565 00:19:19.565 Latency(us) 00:19:19.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.565 =================================================================================================================== 00:19:19.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 377178 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- 
# '[' tcp == tcp ']' 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.565 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.565 rmmod nvme_tcp 00:19:19.565 rmmod nvme_fabrics 00:19:19.824 rmmod nvme_keyring 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 377054 ']' 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 377054 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 377054 ']' 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 377054 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 377054 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 377054' 00:19:19.824 killing process with pid 377054 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 377054 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 377054 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:19.824 
21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.824 21:41:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.084 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.084 21:41:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.018 21:41:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:22.018 21:41:12 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.LJPVOsPDMh /tmp/tmp.nEHqygOd2J /tmp/tmp.nyo2u5M258 00:19:22.018 00:19:22.018 real 1m18.275s 00:19:22.018 user 2m12.084s 00:19:22.018 sys 0m21.985s 00:19:22.018 21:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.018 21:41:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.018 ************************************ 00:19:22.018 END TEST nvmf_tls 00:19:22.018 ************************************ 00:19:22.018 21:41:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:22.018 21:41:12 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:22.018 21:41:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:22.018 21:41:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.018 21:41:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:22.018 ************************************ 00:19:22.018 START TEST nvmf_fips 00:19:22.018 ************************************ 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 
00:19:22.018 * Looking for test storage... 00:19:22.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.018 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:22.019 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:22.277 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:22.278 21:41:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:22.278 Error setting digest 00:19:22.278 0022E253847F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:22.278 0022E253847F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:19:22.278 21:41:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:24.228 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:24.228 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:24.228 Found net devices under 0000:08:00.0: cvl_0_0 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:24.228 Found net devices under 0000:08:00.1: cvl_0_1 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:19:24.228 00:19:24.228 --- 10.0.0.2 ping statistics --- 00:19:24.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.228 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:19:24.228 00:19:24.228 --- 10.0.0.1 ping statistics --- 00:19:24.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.228 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=378922 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 378922 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 378922 ']' 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.228 21:41:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:24.228 [2024-07-15 21:41:14.870208] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:19:24.228 [2024-07-15 21:41:14.870296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.228 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.228 [2024-07-15 21:41:14.935291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.485 [2024-07-15 21:41:15.050841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.485 [2024-07-15 21:41:15.050898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.485 [2024-07-15 21:41:15.050914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.485 [2024-07-15 21:41:15.050927] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.485 [2024-07-15 21:41:15.050938] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:24.485 [2024-07-15 21:41:15.050967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:25.438 21:41:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.438 [2024-07-15 21:41:16.092145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.438 [2024-07-15 21:41:16.108142] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:19:25.438 [2024-07-15 21:41:16.108325] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.438 [2024-07-15 21:41:16.137332] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:25.438 malloc0 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=379045 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 379045 /var/tmp/bdevperf.sock 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 379045 ']' 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.438 21:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:25.438 [2024-07-15 21:41:16.228171] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:19:25.438 [2024-07-15 21:41:16.228280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379045 ] 00:19:25.696 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.696 [2024-07-15 21:41:16.276011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.696 [2024-07-15 21:41:16.375348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.696 21:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.696 21:41:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:25.696 21:41:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:25.953 [2024-07-15 21:41:16.684332] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.953 [2024-07-15 21:41:16.684452] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:26.211 TLSTESTn1 00:19:26.211 21:41:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:26.211 Running I/O for 10 seconds... 
00:19:36.196 00:19:36.196 Latency(us) 00:19:36.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.196 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.196 Verification LBA range: start 0x0 length 0x2000 00:19:36.196 TLSTESTn1 : 10.02 3472.29 13.56 0.00 0.00 36793.92 6602.15 38641.97 00:19:36.196 =================================================================================================================== 00:19:36.196 Total : 3472.29 13.56 0.00 0.00 36793.92 6602.15 38641.97 00:19:36.196 0 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:36.196 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:36.196 nvmf_trace.0 00:19:36.454 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:36.454 21:41:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 379045 00:19:36.454 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 379045 ']' 00:19:36.454 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 
379045 00:19:36.454 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:36.454 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.454 21:41:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 379045 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 379045' 00:19:36.454 killing process with pid 379045 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 379045 00:19:36.454 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.454 00:19:36.454 Latency(us) 00:19:36.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.454 =================================================================================================================== 00:19:36.454 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.454 [2024-07-15 21:41:27.017851] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 379045 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.454 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:19:36.454 rmmod nvme_tcp 00:19:36.454 rmmod nvme_fabrics 00:19:36.712 rmmod nvme_keyring 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 378922 ']' 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 378922 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 378922 ']' 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 378922 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 378922 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 378922' 00:19:36.712 killing process with pid 378922 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 378922 00:19:36.712 [2024-07-15 21:41:27.313376] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:36.712 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 378922 00:19:36.972 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:36.972 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:36.972 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:36.973 
21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.973 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:36.973 21:41:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.973 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.973 21:41:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.877 21:41:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:38.877 21:41:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:38.877 00:19:38.877 real 0m16.847s 00:19:38.877 user 0m22.266s 00:19:38.877 sys 0m4.975s 00:19:38.877 21:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:38.877 21:41:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:38.878 ************************************ 00:19:38.878 END TEST nvmf_fips 00:19:38.878 ************************************ 00:19:38.878 21:41:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:38.878 21:41:29 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:38.878 21:41:29 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:38.878 21:41:29 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:38.878 21:41:29 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:38.878 21:41:29 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:38.878 21:41:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:40.788 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:40.788 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.788 
21:41:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:40.788 Found net devices under 0000:08:00.0: cvl_0_0 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:40.788 Found net devices under 0000:08:00.1: cvl_0_1 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.788 21:41:31 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:40.789 21:41:31 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:40.789 21:41:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:40.789 21:41:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.789 21:41:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:40.789 ************************************ 00:19:40.789 START TEST nvmf_perf_adq 00:19:40.789 ************************************ 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:40.789 * Looking for test storage... 00:19:40.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.789 21:41:31 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
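The `paths/export.sh` lines above prepend the same toolchain directories each time the file is sourced, which is why the traced PATH is full of repeated `/opt/go`, `/opt/golangci`, and `/opt/protoc` entries. A generic dedup pass (a sketch, not part of the SPDK scripts; the short paths are hypothetical) keeps only the first occurrence of each entry while preserving order:

```shell
# PATH-style string with repeated entries, mirroring the duplication visible above.
path="/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"

deduped=""
IFS=':'
for dir in $path; do
  case ":$deduped:" in
    *":$dir:"*) ;;                              # already kept, skip the repeat
    *) deduped="${deduped:+$deduped:}$dir" ;;   # first occurrence, keep order
  esac
done
unset IFS

echo "$deduped"   # -> /opt/go/bin:/usr/bin:/bin
```

Wrapping the colon-joined string in `:...:` on both sides of the `case` makes the membership test exact, so `/bin` never matches inside `/usr/bin`.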
00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:40.789 21:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
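The `e810+=(${pci_bus_cache["$intel:0x1592"]})` lines above sort NICs into families by indexing an associative array keyed `vendor:device`. A minimal stand-in (the cache contents here are hypothetical, shaped like this run's two E810 ports) shows the mechanism:

```shell
# Hypothetical pci_bus_cache, keyed "vendor:device" the way nvmf/common.sh indexes it.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:08:00.0 0000:08:00.1"   # two E810 ports, as in this run
  ["0x15b3:0x1017"]=""                            # no matching Mellanox device present
)

e810=()
mlx=()
# The expansion is deliberately unquoted: the space-separated PCI addresses
# split into one array element each, and an empty value contributes nothing.
e810+=(${pci_bus_cache["0x8086:0x159b"]})
mlx+=(${pci_bus_cache["0x15b3:0x1017"]})

echo "${#e810[@]} ${#mlx[@]}"   # -> 2 0
```

That unquoted word-splitting is why a device ID with no hits (like the Mellanox key here) leaves its family array empty rather than adding a blank element.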
00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:42.690 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:42.690 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:42.690 Found net devices under 0000:08:00.0: cvl_0_0 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:42.690 Found net devices under 0000:08:00.1: cvl_0_1 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:42.690 21:41:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:42.949 21:41:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:46.234 21:41:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
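The basename step traced above, `pci_net_devs=("${pci_net_devs[@]##*/}")`, turns the sysfs glob results into bare interface names in one expansion: `##*/` strips the longest prefix through the last slash from every array element. A self-contained sketch (paths hypothetical):

```shell
# The glob in nvmf/common.sh fills the array with full sysfs paths; the ##*/
# expansion then leaves only the final path component of each element.
pci_net_devs=("/sys/bus/pci/devices/0000:08:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:08:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[@]}"   # -> cvl_0_0 cvl_0_1
```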
00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:51.516 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.516 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 
(0x8086 - 0x159b)' 00:19:51.517 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:51.517 Found net devices under 0000:08:00.0: cvl_0_0 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:51.517 Found net devices under 0000:08:00.1: cvl_0_1 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:19:51.517 00:19:51.517 --- 10.0.0.2 ping statistics --- 00:19:51.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.517 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:51.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:19:51.517 00:19:51.517 --- 10.0.0.1 ping statistics --- 00:19:51.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.517 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=383540 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 383540 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- 
# '[' -z 383540 ']' 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.517 21:41:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.517 [2024-07-15 21:41:41.813493] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:19:51.517 [2024-07-15 21:41:41.813601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.517 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.517 [2024-07-15 21:41:41.880625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.517 [2024-07-15 21:41:41.999077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.517 [2024-07-15 21:41:41.999133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.517 [2024-07-15 21:41:41.999173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.517 [2024-07-15 21:41:41.999199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.517 [2024-07-15 21:41:41.999218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
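Condensed from the `nvmf_tcp_init` trace above, the namespace plumbing amounts to the following sequence: the target-side port moves into a private namespace, each side gets one end of the 10.0.0.0/24 subnet, and an iptables rule opens the NVMe/TCP port. This is a config fragment for reference only; it requires root and the `cvl_0_0`/`cvl_0_1` interfaces created by the `ice` driver, so it is not runnable outside that environment.

```shell
# Target interface lives in its own namespace; initiator stays in the host namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # target address reachable from the initiator side
```

The successful pings in the log (0.295 ms initiator-to-target, 0.074 ms the reverse way) are what let `nvmf_tcp_init` return 0 and the test proceed.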
00:19:51.517 [2024-07-15 21:41:41.999312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.517 [2024-07-15 21:41:41.999377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.517 [2024-07-15 21:41:41.999444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.517 [2024-07-15 21:41:41.999453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
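The `waitforlisten 383540` call above blocks until the freshly launched `nvmf_tgt` is accepting RPCs. A generic sketch of the idea (the helper name and the plain `-e` existence test are simplifications of mine; the real function watches `/var/tmp/spdk.sock` and also verifies the pid is alive):

```shell
# Poll until a path appears, giving up after max_retries * 0.1 seconds.
wait_for_path() {
  local path=$1 max_retries=${2:-100} i=0
  while [ ! -e "$path" ] && [ "$i" -lt "$max_retries" ]; do
    sleep 0.1
    i=$((i + 1))
  done
  [ -e "$path" ]   # exit status reports success or timeout
}
```

Usage would look like `wait_for_path /var/tmp/spdk.sock 100 || exit 1`, after which RPCs such as `sock_get_default_impl` (which returns `posix` here) can be issued safely.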
00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.517 [2024-07-15 21:41:42.225671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:51.517 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 Malloc1 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.518 
21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.518 [2024-07-15 21:41:42.274441] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=383658 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:51.518 21:41:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:51.776 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.675 21:41:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:53.675 21:41:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.675 21:41:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.675 21:41:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.675 21:41:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:53.675 
"tick_rate": 2700000000, 00:19:53.675 "poll_groups": [ 00:19:53.675 { 00:19:53.675 "name": "nvmf_tgt_poll_group_000", 00:19:53.675 "admin_qpairs": 1, 00:19:53.675 "io_qpairs": 1, 00:19:53.675 "current_admin_qpairs": 1, 00:19:53.675 "current_io_qpairs": 1, 00:19:53.675 "pending_bdev_io": 0, 00:19:53.675 "completed_nvme_io": 20956, 00:19:53.675 "transports": [ 00:19:53.675 { 00:19:53.675 "trtype": "TCP" 00:19:53.675 } 00:19:53.675 ] 00:19:53.675 }, 00:19:53.675 { 00:19:53.675 "name": "nvmf_tgt_poll_group_001", 00:19:53.675 "admin_qpairs": 0, 00:19:53.675 "io_qpairs": 1, 00:19:53.675 "current_admin_qpairs": 0, 00:19:53.675 "current_io_qpairs": 1, 00:19:53.675 "pending_bdev_io": 0, 00:19:53.675 "completed_nvme_io": 21886, 00:19:53.675 "transports": [ 00:19:53.675 { 00:19:53.675 "trtype": "TCP" 00:19:53.675 } 00:19:53.675 ] 00:19:53.675 }, 00:19:53.675 { 00:19:53.675 "name": "nvmf_tgt_poll_group_002", 00:19:53.675 "admin_qpairs": 0, 00:19:53.675 "io_qpairs": 1, 00:19:53.675 "current_admin_qpairs": 0, 00:19:53.675 "current_io_qpairs": 1, 00:19:53.675 "pending_bdev_io": 0, 00:19:53.675 "completed_nvme_io": 21470, 00:19:53.675 "transports": [ 00:19:53.675 { 00:19:53.675 "trtype": "TCP" 00:19:53.675 } 00:19:53.675 ] 00:19:53.675 }, 00:19:53.675 { 00:19:53.675 "name": "nvmf_tgt_poll_group_003", 00:19:53.675 "admin_qpairs": 0, 00:19:53.675 "io_qpairs": 1, 00:19:53.675 "current_admin_qpairs": 0, 00:19:53.675 "current_io_qpairs": 1, 00:19:53.675 "pending_bdev_io": 0, 00:19:53.675 "completed_nvme_io": 21667, 00:19:53.675 "transports": [ 00:19:53.675 { 00:19:53.675 "trtype": "TCP" 00:19:53.675 } 00:19:53.675 ] 00:19:53.675 } 00:19:53.675 ] 00:19:53.675 }' 00:19:53.675 21:41:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:53.676 21:41:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:53.676 21:41:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:53.676 21:41:44 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:53.676 21:41:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 383658 00:20:01.805 Initializing NVMe Controllers 00:20:01.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:01.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:01.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:01.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:01.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:01.805 Initialization complete. Launching workers. 00:20:01.805 ======================================================== 00:20:01.805 Latency(us) 00:20:01.805 Device Information : IOPS MiB/s Average min max 00:20:01.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11297.40 44.13 5665.35 2421.27 10985.04 00:20:01.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11662.90 45.56 5487.94 2112.42 14624.10 00:20:01.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11442.50 44.70 5593.80 2593.36 14444.02 00:20:01.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11044.70 43.14 5795.43 2237.00 9724.84 00:20:01.805 ======================================================== 00:20:01.805 Total : 45447.50 177.53 5633.42 2112.42 14624.10 00:20:01.805 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:01.805 21:41:52 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:01.805 rmmod nvme_tcp 00:20:01.805 rmmod nvme_fabrics 00:20:01.805 rmmod nvme_keyring 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 383540 ']' 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 383540 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 383540 ']' 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 383540 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 383540 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 383540' 00:20:01.805 killing process with pid 383540 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 383540 00:20:01.805 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 383540 00:20:02.062 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:02.062 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:02.062 21:41:52 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:02.062 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.062 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.062 21:41:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.062 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.062 21:41:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.588 21:41:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.588 21:41:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:04.588 21:41:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:04.588 21:41:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:07.116 21:41:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.387 21:42:02 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:12.387 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:12.387 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:12.387 Found net devices under 0000:08:00.0: cvl_0_0 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.387 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:12.388 Found net devices under 0000:08:00.1: cvl_0_1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:12.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:20:12.388 00:20:12.388 --- 10.0.0.2 ping statistics --- 00:20:12.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.388 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:12.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:20:12.388 00:20:12.388 --- 10.0.0.1 ping statistics --- 00:20:12.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.388 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:12.388 net.core.busy_poll = 1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:12.388 net.core.busy_read = 1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=385800 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 385800 00:20:12.388 21:42:02 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 385800 ']' 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.388 [2024-07-15 21:42:02.694495] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:20:12.388 [2024-07-15 21:42:02.694583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.388 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.388 [2024-07-15 21:42:02.760208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:12.388 [2024-07-15 21:42:02.878655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.388 [2024-07-15 21:42:02.878710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.388 [2024-07-15 21:42:02.878727] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.388 [2024-07-15 21:42:02.878741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:12.388 [2024-07-15 21:42:02.878753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.388 [2024-07-15 21:42:02.879343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.388 [2024-07-15 21:42:02.879415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.388 [2024-07-15 21:42:02.879516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.388 [2024-07-15 21:42:02.879526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 
--enable-zerocopy-send-server -i posix 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.388 21:42:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.388 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.388 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:12.388 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.389 [2024-07-15 21:42:03.122844] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.389 Malloc1 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.389 [2024-07-15 21:42:03.173091] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=385914 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:12.389 21:42:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:12.645 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:14.544 "tick_rate": 2700000000, 00:20:14.544 "poll_groups": [ 00:20:14.544 { 00:20:14.544 "name": "nvmf_tgt_poll_group_000", 00:20:14.544 "admin_qpairs": 1, 00:20:14.544 "io_qpairs": 2, 00:20:14.544 "current_admin_qpairs": 1, 00:20:14.544 "current_io_qpairs": 2, 00:20:14.544 "pending_bdev_io": 0, 00:20:14.544 "completed_nvme_io": 26641, 00:20:14.544 "transports": [ 00:20:14.544 { 00:20:14.544 "trtype": "TCP" 00:20:14.544 } 00:20:14.544 ] 00:20:14.544 }, 00:20:14.544 { 00:20:14.544 "name": "nvmf_tgt_poll_group_001", 00:20:14.544 "admin_qpairs": 0, 00:20:14.544 "io_qpairs": 2, 00:20:14.544 "current_admin_qpairs": 0, 00:20:14.544 "current_io_qpairs": 2, 00:20:14.544 "pending_bdev_io": 0, 00:20:14.544 "completed_nvme_io": 27559, 00:20:14.544 "transports": [ 00:20:14.544 { 00:20:14.544 "trtype": "TCP" 00:20:14.544 } 00:20:14.544 ] 00:20:14.544 }, 00:20:14.544 { 00:20:14.544 "name": "nvmf_tgt_poll_group_002", 00:20:14.544 "admin_qpairs": 0, 00:20:14.544 "io_qpairs": 0, 00:20:14.544 "current_admin_qpairs": 0, 00:20:14.544 "current_io_qpairs": 0, 00:20:14.544 "pending_bdev_io": 0, 00:20:14.544 "completed_nvme_io": 0, 00:20:14.544 "transports": [ 00:20:14.544 { 00:20:14.544 "trtype": "TCP" 00:20:14.544 } 00:20:14.544 ] 00:20:14.544 }, 00:20:14.544 { 00:20:14.544 "name": "nvmf_tgt_poll_group_003", 00:20:14.544 "admin_qpairs": 0, 00:20:14.544 "io_qpairs": 0, 00:20:14.544 "current_admin_qpairs": 0, 00:20:14.544 "current_io_qpairs": 0, 00:20:14.544 "pending_bdev_io": 0, 00:20:14.544 "completed_nvme_io": 0, 00:20:14.544 "transports": [ 00:20:14.544 { 00:20:14.544 "trtype": "TCP" 00:20:14.544 } 00:20:14.544 ] 00:20:14.544 } 00:20:14.544 ] 00:20:14.544 }' 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq 
-- target/perf_adq.sh@100 -- # wc -l 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:14.544 21:42:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 385914 00:20:22.650 Initializing NVMe Controllers 00:20:22.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:22.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:22.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:22.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:22.650 Initialization complete. Launching workers. 00:20:22.650 ======================================================== 00:20:22.650 Latency(us) 00:20:22.650 Device Information : IOPS MiB/s Average min max 00:20:22.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6947.11 27.14 9212.33 1696.24 55362.96 00:20:22.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6860.01 26.80 9333.56 1737.94 55462.09 00:20:22.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7288.20 28.47 8805.14 1820.16 53857.87 00:20:22.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7346.80 28.70 8712.55 1754.83 60559.87 00:20:22.651 ======================================================== 00:20:22.651 Total : 28442.12 111.10 9008.13 1696.24 60559.87 00:20:22.651 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.651 rmmod nvme_tcp 00:20:22.651 rmmod nvme_fabrics 00:20:22.651 rmmod nvme_keyring 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 385800 ']' 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 385800 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 385800 ']' 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 385800 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 385800 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 385800' 00:20:22.651 killing process with pid 385800 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 385800 00:20:22.651 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 385800 00:20:22.909 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:20:22.909 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.909 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.909 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.909 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.909 21:42:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.909 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.909 21:42:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.193 21:42:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.193 21:42:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:26.193 00:20:26.193 real 0m45.445s 00:20:26.193 user 2m37.523s 00:20:26.193 sys 0m11.325s 00:20:26.193 21:42:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:26.193 21:42:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.193 ************************************ 00:20:26.193 END TEST nvmf_perf_adq 00:20:26.193 ************************************ 00:20:26.193 21:42:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:26.193 21:42:16 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:26.193 21:42:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:26.193 21:42:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.193 21:42:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:26.193 ************************************ 00:20:26.193 START TEST nvmf_shutdown 00:20:26.193 ************************************ 00:20:26.193 
21:42:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:26.193 * Looking for test storage... 00:20:26.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.193 
21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.193 21:42:16 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:26.193 ************************************ 00:20:26.193 START TEST nvmf_shutdown_tc1 00:20:26.193 ************************************ 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.193 21:42:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.193 21:42:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.098 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:28.099 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:28.099 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:28.099 Found net devices under 0000:08:00.0: cvl_0_0 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.099 21:42:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:28.099 Found net devices under 0000:08:00.1: cvl_0_1 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.099 
21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:28.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:20:28.099 00:20:28.099 --- 10.0.0.2 ping statistics --- 00:20:28.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.099 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:20:28.099 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:20:28.099 00:20:28.099 --- 10.0.0.1 ping statistics --- 00:20:28.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.099 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=388946 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 388946 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 388946 ']' 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.100 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.100 [2024-07-15 21:42:18.680122] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:20:28.100 [2024-07-15 21:42:18.680221] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.100 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.100 [2024-07-15 21:42:18.744416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.100 [2024-07-15 21:42:18.861195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.100 [2024-07-15 21:42:18.861249] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.100 [2024-07-15 21:42:18.861267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.100 [2024-07-15 21:42:18.861280] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.100 [2024-07-15 21:42:18.861293] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:28.100 [2024-07-15 21:42:18.861349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.100 [2024-07-15 21:42:18.861431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.100 [2024-07-15 21:42:18.861503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:28.100 [2024-07-15 21:42:18.861506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.358 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:28.358 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:28.358 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:28.358 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.358 21:42:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.358 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.358 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:28.358 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.358 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.358 [2024-07-15 21:42:19.025956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.358 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.358 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:28.359 
21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.359 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.359 Malloc1 00:20:28.359 [2024-07-15 21:42:19.115861] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.359 Malloc2 00:20:28.617 Malloc3 00:20:28.617 Malloc4 00:20:28.617 Malloc5 00:20:28.617 Malloc6 00:20:28.617 Malloc7 00:20:28.875 Malloc8 00:20:28.875 Malloc9 00:20:28.875 Malloc10 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=389025 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 389025 
/var/tmp/bdevperf.sock 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 389025 ']' 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.875 { 00:20:28.875 "params": { 00:20:28.875 "name": "Nvme$subsystem", 00:20:28.875 "trtype": "$TEST_TRANSPORT", 00:20:28.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.875 "adrfam": "ipv4", 00:20:28.875 "trsvcid": "$NVMF_PORT", 00:20:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.875 "hdgst": ${hdgst:-false}, 00:20:28.875 "ddgst": ${ddgst:-false} 00:20:28.875 }, 00:20:28.875 "method": "bdev_nvme_attach_controller" 00:20:28.875 } 00:20:28.875 EOF 00:20:28.875 )") 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.875 { 00:20:28.875 "params": { 00:20:28.875 "name": "Nvme$subsystem", 00:20:28.875 "trtype": "$TEST_TRANSPORT", 00:20:28.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.875 "adrfam": "ipv4", 00:20:28.875 "trsvcid": "$NVMF_PORT", 00:20:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.875 "hdgst": ${hdgst:-false}, 00:20:28.875 "ddgst": ${ddgst:-false} 00:20:28.875 
}, 00:20:28.875 "method": "bdev_nvme_attach_controller" 00:20:28.875 } 00:20:28.875 EOF 00:20:28.875 )") 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.875 { 00:20:28.875 "params": { 00:20:28.875 "name": "Nvme$subsystem", 00:20:28.875 "trtype": "$TEST_TRANSPORT", 00:20:28.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.875 "adrfam": "ipv4", 00:20:28.875 "trsvcid": "$NVMF_PORT", 00:20:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.875 "hdgst": ${hdgst:-false}, 00:20:28.875 "ddgst": ${ddgst:-false} 00:20:28.875 }, 00:20:28.875 "method": "bdev_nvme_attach_controller" 00:20:28.875 } 00:20:28.875 EOF 00:20:28.875 )") 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.875 { 00:20:28.875 "params": { 00:20:28.875 "name": "Nvme$subsystem", 00:20:28.875 "trtype": "$TEST_TRANSPORT", 00:20:28.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.875 "adrfam": "ipv4", 00:20:28.875 "trsvcid": "$NVMF_PORT", 00:20:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.875 "hdgst": ${hdgst:-false}, 00:20:28.875 "ddgst": ${ddgst:-false} 00:20:28.875 }, 00:20:28.875 "method": "bdev_nvme_attach_controller" 00:20:28.875 } 00:20:28.875 EOF 00:20:28.875 )") 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.875 21:42:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.875 { 00:20:28.875 "params": { 00:20:28.875 "name": "Nvme$subsystem", 00:20:28.875 "trtype": "$TEST_TRANSPORT", 00:20:28.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.875 "adrfam": "ipv4", 00:20:28.875 "trsvcid": "$NVMF_PORT", 00:20:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.875 "hdgst": ${hdgst:-false}, 00:20:28.875 "ddgst": ${ddgst:-false} 00:20:28.875 }, 00:20:28.875 "method": "bdev_nvme_attach_controller" 00:20:28.875 } 00:20:28.875 EOF 00:20:28.875 )") 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.875 { 00:20:28.875 "params": { 00:20:28.875 "name": "Nvme$subsystem", 00:20:28.875 "trtype": "$TEST_TRANSPORT", 00:20:28.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.875 "adrfam": "ipv4", 00:20:28.875 "trsvcid": "$NVMF_PORT", 00:20:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.875 "hdgst": ${hdgst:-false}, 00:20:28.875 "ddgst": ${ddgst:-false} 00:20:28.875 }, 00:20:28.875 "method": "bdev_nvme_attach_controller" 00:20:28.875 } 00:20:28.875 EOF 00:20:28.875 )") 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.875 { 00:20:28.875 
"params": { 00:20:28.875 "name": "Nvme$subsystem", 00:20:28.875 "trtype": "$TEST_TRANSPORT", 00:20:28.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.875 "adrfam": "ipv4", 00:20:28.875 "trsvcid": "$NVMF_PORT", 00:20:28.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.875 "hdgst": ${hdgst:-false}, 00:20:28.875 "ddgst": ${ddgst:-false} 00:20:28.875 }, 00:20:28.875 "method": "bdev_nvme_attach_controller" 00:20:28.875 } 00:20:28.875 EOF 00:20:28.875 )") 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.875 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.875 { 00:20:28.875 "params": { 00:20:28.875 "name": "Nvme$subsystem", 00:20:28.876 "trtype": "$TEST_TRANSPORT", 00:20:28.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "$NVMF_PORT", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.876 "hdgst": ${hdgst:-false}, 00:20:28.876 "ddgst": ${ddgst:-false} 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 } 00:20:28.876 EOF 00:20:28.876 )") 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.876 { 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme$subsystem", 00:20:28.876 "trtype": "$TEST_TRANSPORT", 00:20:28.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "$NVMF_PORT", 00:20:28.876 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.876 "hdgst": ${hdgst:-false}, 00:20:28.876 "ddgst": ${ddgst:-false} 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 } 00:20:28.876 EOF 00:20:28.876 )") 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:28.876 { 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme$subsystem", 00:20:28.876 "trtype": "$TEST_TRANSPORT", 00:20:28.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "$NVMF_PORT", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.876 "hdgst": ${hdgst:-false}, 00:20:28.876 "ddgst": ${ddgst:-false} 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 } 00:20:28.876 EOF 00:20:28.876 )") 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:28.876 21:42:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme1", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme2", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme3", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme4", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme5", 00:20:28.876 
"trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme6", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme7", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme8", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme9", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": 
false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 },{ 00:20:28.876 "params": { 00:20:28.876 "name": "Nvme10", 00:20:28.876 "trtype": "tcp", 00:20:28.876 "traddr": "10.0.0.2", 00:20:28.876 "adrfam": "ipv4", 00:20:28.876 "trsvcid": "4420", 00:20:28.876 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:28.876 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:28.876 "hdgst": false, 00:20:28.876 "ddgst": false 00:20:28.876 }, 00:20:28.876 "method": "bdev_nvme_attach_controller" 00:20:28.876 }' 00:20:28.876 [2024-07-15 21:42:19.591059] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:20:28.876 [2024-07-15 21:42:19.591155] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:28.876 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.876 [2024-07-15 21:42:19.650829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.134 [2024-07-15 21:42:19.751135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.026 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.027 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:31.027 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:31.027 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.027 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.027 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.027 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # 
kill -9 389025 00:20:31.027 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:31.027 21:42:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:32.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 389025 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 388946 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 
"trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 
00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:32.015 { 00:20:32.015 "params": { 00:20:32.015 "name": "Nvme$subsystem", 00:20:32.015 "trtype": "$TEST_TRANSPORT", 00:20:32.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:32.015 "adrfam": "ipv4", 00:20:32.015 "trsvcid": "$NVMF_PORT", 00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:32.015 "hdgst": ${hdgst:-false}, 00:20:32.015 "ddgst": ${ddgst:-false} 00:20:32.015 }, 00:20:32.015 "method": "bdev_nvme_attach_controller" 00:20:32.015 } 00:20:32.015 EOF 00:20:32.015 )") 00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:32.015 21:42:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq .
00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=,
00:20:32.015 21:42:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:20:32.015 "params": {
00:20:32.015 "name": "Nvme1",
00:20:32.015 "trtype": "tcp",
00:20:32.015 "traddr": "10.0.0.2",
00:20:32.015 "adrfam": "ipv4",
00:20:32.015 "trsvcid": "4420",
00:20:32.015 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:32.015 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:32.015 "hdgst": false,
00:20:32.015 "ddgst": false
00:20:32.015 },
00:20:32.015 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme2",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme3",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme4",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme5",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme6",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme7",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme8",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme9",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 },{
00:20:32.016 "params": {
00:20:32.016 "name": "Nvme10",
00:20:32.016 "trtype": "tcp",
00:20:32.016 "traddr": "10.0.0.2",
00:20:32.016 "adrfam": "ipv4",
00:20:32.016 "trsvcid": "4420",
00:20:32.016 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:32.016 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:32.016 "hdgst": false,
00:20:32.016 "ddgst": false
00:20:32.016 },
00:20:32.016 "method": "bdev_nvme_attach_controller"
00:20:32.016 }'
00:20:32.016 [2024-07-15 21:42:22.695618] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:20:32.016 [2024-07-15 21:42:22.695713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389340 ]
00:20:32.016 EAL: No free 2048 kB hugepages reported on node 1
00:20:32.016 [2024-07-15 21:42:22.754346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:32.273 [2024-07-15 21:42:22.854313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:33.642 Running I/O for 1 seconds...
00:20:35.010
00:20:35.010 Latency(us)
00:20:35.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:35.010 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme1n1 : 1.09 176.91 11.06 0.00 0.00 357732.50 37865.24 292047.83
00:20:35.010 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme2n1 : 1.17 219.48 13.72 0.00 0.00 283721.01 18544.26 271853.04
00:20:35.010 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme3n1 : 1.12 227.59 14.22 0.00 0.00 268769.66 18252.99 282727.16
00:20:35.010 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme4n1 : 1.18 216.52 13.53 0.00 0.00 278294.76 19806.44 260978.92
00:20:35.010 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme5n1 : 1.18 221.56 13.85 0.00 0.00 266595.23 4854.52 274959.93
00:20:35.010 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme6n1 : 1.18 216.12 13.51 0.00 0.00 269363.58 20680.25 298261.62
00:20:35.010 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme7n1 : 1.16 220.81 13.80 0.00 0.00 258490.22 20583.16 281173.71
00:20:35.010 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme8n1 : 1.19 268.69 16.79 0.00 0.00 208258.39 13301.38 270299.59
00:20:35.010 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme9n1 : 1.18 236.56 14.78 0.00 0.00 230163.20 7864.32 278066.82
00:20:35.010 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:35.010 Verification LBA range: start 0x0 length 0x400
00:20:35.010 Nvme10n1 : 1.19 217.96 13.62 0.00 0.00 248229.36 3106.89 292047.83
00:20:35.010 ===================================================================================================================
00:20:35.010 Total : 2222.21 138.89 0.00 0.00 262927.57 3106.89 298261.62
00:20:35.010 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:35.011 rmmod nvme_tcp
00:20:35.011 rmmod nvme_fabrics
00:20:35.011 rmmod
nvme_keyring 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 388946 ']' 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 388946 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 388946 ']' 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 388946 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.011 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 388946 00:20:35.268 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:35.268 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:35.268 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 388946' 00:20:35.268 killing process with pid 388946 00:20:35.268 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 388946 00:20:35.268 21:42:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 388946 00:20:35.544 21:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.544 21:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- 
# [[ tcp == \t\c\p ]] 00:20:35.544 21:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.545 21:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.545 21:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.545 21:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.545 21:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.545 21:42:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.451 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.451 00:20:37.451 real 0m11.357s 00:20:37.451 user 0m34.006s 00:20:37.451 sys 0m2.838s 00:20:37.451 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.451 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.451 ************************************ 00:20:37.451 END TEST nvmf_shutdown_tc1 00:20:37.451 ************************************ 00:20:37.451 21:42:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:37.451 21:42:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:37.451 21:42:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:37.451 21:42:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.451 21:42:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:37.709 ************************************ 00:20:37.709 START TEST nvmf_shutdown_tc2 00:20:37.709 ************************************ 00:20:37.709 21:42:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.709 21:42:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:37.709 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.709 21:42:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:37.709 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:37.709 Found net devices under 0000:08:00.0: cvl_0_0 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:37.709 Found net devices under 0000:08:00.1: cvl_0_1 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.709 21:42:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:37.709 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.710 21:42:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:20:37.710 00:20:37.710 --- 10.0.0.2 ping statistics --- 00:20:37.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.710 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:20:37.710 00:20:37.710 --- 10.0.0.1 ping statistics --- 00:20:37.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.710 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 
00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=389966 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 389966 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 389966 ']' 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.710 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:37.710 [2024-07-15 21:42:28.494818] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:20:37.710 [2024-07-15 21:42:28.494919] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.968 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.968 [2024-07-15 21:42:28.561035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.968 [2024-07-15 21:42:28.677946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.968 [2024-07-15 21:42:28.678002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:37.968 [2024-07-15 21:42:28.678019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.968 [2024-07-15 21:42:28.678032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.968 [2024-07-15 21:42:28.678044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.968 [2024-07-15 21:42:28.678128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.968 [2024-07-15 21:42:28.678197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.968 [2024-07-15 21:42:28.678529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.968 [2024-07-15 21:42:28.678563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 [2024-07-15 21:42:28.813799] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.226 21:42:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 Malloc1 00:20:38.226 [2024-07-15 21:42:28.886752] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.226 Malloc2 00:20:38.226 Malloc3 00:20:38.226 Malloc4 00:20:38.484 Malloc5 00:20:38.484 Malloc6 00:20:38.484 Malloc7 00:20:38.484 Malloc8 00:20:38.484 Malloc9 00:20:38.484 Malloc10 00:20:38.741 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.741 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=390084 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 390084 /var/tmp/bdevperf.sock 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 390084 ']' 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 
}, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 
"params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:38.742 { 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme$subsystem", 00:20:38.742 "trtype": "$TEST_TRANSPORT", 00:20:38.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "$NVMF_PORT", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.742 "hdgst": ${hdgst:-false}, 00:20:38.742 "ddgst": ${ddgst:-false} 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 } 00:20:38.742 EOF 00:20:38.742 )") 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:38.742 21:42:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme1", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme2", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme3", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme4", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme5", 00:20:38.742 
"trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme6", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme7", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme8", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme9", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": 
false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 },{ 00:20:38.742 "params": { 00:20:38.742 "name": "Nvme10", 00:20:38.742 "trtype": "tcp", 00:20:38.742 "traddr": "10.0.0.2", 00:20:38.742 "adrfam": "ipv4", 00:20:38.742 "trsvcid": "4420", 00:20:38.742 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:38.742 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:38.742 "hdgst": false, 00:20:38.742 "ddgst": false 00:20:38.742 }, 00:20:38.742 "method": "bdev_nvme_attach_controller" 00:20:38.742 }' 00:20:38.742 [2024-07-15 21:42:29.377770] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:20:38.743 [2024-07-15 21:42:29.377860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390084 ] 00:20:38.743 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.743 [2024-07-15 21:42:29.435337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.998 [2024-07-15 21:42:29.535421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.367 Running I/O for 10 seconds... 
00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:40.929 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:41.186 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 390084 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 
-- # '[' -z 390084 ']' 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 390084 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 390084 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 390084' 00:20:41.187 killing process with pid 390084 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 390084 00:20:41.187 21:42:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 390084 00:20:41.187 Received shutdown signal, test time was about 0.835122 seconds 00:20:41.187 00:20:41.187 Latency(us) 00:20:41.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.187 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme1n1 : 0.81 235.67 14.73 0.00 0.00 266905.73 20971.52 245444.46 00:20:41.187 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme2n1 : 0.83 230.14 14.38 0.00 0.00 267975.74 21845.33 278066.82 00:20:41.187 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme3n1 : 0.81 237.41 
14.84 0.00 0.00 253030.15 35340.89 259425.47 00:20:41.187 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme4n1 : 0.83 232.54 14.53 0.00 0.00 251406.41 18155.90 276513.37 00:20:41.187 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme5n1 : 0.79 161.64 10.10 0.00 0.00 351759.17 19709.35 285834.05 00:20:41.187 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme6n1 : 0.82 234.64 14.66 0.00 0.00 236999.87 21068.61 234570.33 00:20:41.187 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme7n1 : 0.82 232.87 14.55 0.00 0.00 233195.39 39612.87 273406.48 00:20:41.187 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme8n1 : 0.81 238.19 14.89 0.00 0.00 220709.67 22524.97 271853.04 00:20:41.187 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme9n1 : 0.83 231.18 14.45 0.00 0.00 222552.62 21651.15 270299.59 00:20:41.187 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:41.187 Verification LBA range: start 0x0 length 0x400 00:20:41.187 Nvme10n1 : 0.81 162.97 10.19 0.00 0.00 300378.85 10048.85 307582.29 00:20:41.187 =================================================================================================================== 00:20:41.187 Total : 2197.23 137.33 0.00 0.00 255906.51 10048.85 307582.29 00:20:41.445 21:42:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:42.376 21:42:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 389966 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.377 rmmod nvme_tcp 00:20:42.377 rmmod nvme_fabrics 00:20:42.377 rmmod nvme_keyring 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 389966 ']' 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@490 -- # killprocess 389966 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 389966 ']' 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 389966 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.377 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 389966 00:20:42.635 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:42.635 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:42.635 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 389966' 00:20:42.635 killing process with pid 389966 00:20:42.635 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 389966 00:20:42.635 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 389966 00:20:42.893 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.893 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:42.893 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:42.893 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.893 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.893 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:20:42.893 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:42.893 21:42:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:44.801 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:44.801
00:20:44.801 real 0m7.295s
00:20:44.801 user 0m21.679s
00:20:44.801 sys 0m1.335s
00:20:44.801 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:44.801 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:44.801 ************************************
00:20:44.801 END TEST nvmf_shutdown_tc2
00:20:44.801 ************************************
00:20:44.801 21:42:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:20:44.801 21:42:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:20:44.801 21:42:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:20:44.801 21:42:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:44.801 21:42:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:45.061 ************************************
00:20:45.061 START TEST nvmf_shutdown_tc3
00:20:45.061 ************************************
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=()
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=()
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=()
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=()
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=()
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:20:45.061 Found 0000:08:00.0 (0x8086 - 0x159b)
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:20:45.061 Found 0000:08:00.1 (0x8086 - 0x159b)
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:20:45.061 Found net devices under 0000:08:00.0: cvl_0_0
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:20:45.061 Found net devices under 0000:08:00.1: cvl_0_1
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:45.061 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:45.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:45.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms
00:20:45.062
00:20:45.062 --- 10.0.0.2 ping statistics ---
00:20:45.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:45.062 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:45.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:45.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms
00:20:45.062
00:20:45.062 --- 10.0.0.1 ping statistics ---
00:20:45.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:45.062 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=390793
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 390793
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 390793 ']'
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:45.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:45.062 21:42:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:45.062 [2024-07-15 21:42:35.849046] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:20:45.062 [2024-07-15 21:42:35.849129] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:45.320 EAL: No free 2048 kB hugepages reported on node 1
00:20:45.320 [2024-07-15 21:42:35.914408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:45.320 [2024-07-15 21:42:36.031153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:45.320 [2024-07-15 21:42:36.031211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:45.320 [2024-07-15 21:42:36.031227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:45.320 [2024-07-15 21:42:36.031241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:45.320 [2024-07-15 21:42:36.031252] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:45.320 [2024-07-15 21:42:36.031357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:45.320 [2024-07-15 21:42:36.031493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:20:45.320 [2024-07-15 21:42:36.031624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:20:45.320 [2024-07-15 21:42:36.031634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:45.578 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:45.578 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0
00:20:45.578 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:45.578 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:45.578 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:45.578 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:45.578 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:45.579 [2024-07-15 21:42:36.189867] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:45.579 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:45.579 Malloc1
00:20:45.579 [2024-07-15 21:42:36.268298] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:45.579 Malloc2
00:20:45.579 Malloc3
00:20:45.836 Malloc4
00:20:45.836 Malloc5
00:20:45.836 Malloc6
00:20:45.836 Malloc7
00:20:45.836 Malloc8
00:20:45.836 Malloc9
00:20:46.095 Malloc10
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=390941
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 390941 /var/tmp/bdevperf.sock
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 390941 ']'
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:46.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.095 {
00:20:46.095 "params": {
00:20:46.095 "name": "Nvme$subsystem",
00:20:46.095 "trtype": "$TEST_TRANSPORT",
00:20:46.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.095 "adrfam": "ipv4",
00:20:46.095 "trsvcid": "$NVMF_PORT",
00:20:46.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.095 "hdgst": ${hdgst:-false},
00:20:46.095 "ddgst": ${ddgst:-false}
00:20:46.095 },
00:20:46.095 "method": "bdev_nvme_attach_controller"
00:20:46.095 }
00:20:46.095 EOF
00:20:46.095 )")
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.095 {
00:20:46.095 "params": {
00:20:46.095 "name": "Nvme$subsystem",
00:20:46.095 "trtype": "$TEST_TRANSPORT",
00:20:46.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.095 "adrfam": "ipv4",
00:20:46.095 "trsvcid": "$NVMF_PORT",
00:20:46.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.095 "hdgst": ${hdgst:-false},
00:20:46.095 "ddgst": ${ddgst:-false}
00:20:46.095 },
00:20:46.095 "method": "bdev_nvme_attach_controller"
00:20:46.095 }
00:20:46.095 EOF
00:20:46.095 )")
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.095 {
00:20:46.095 "params": {
00:20:46.095 "name": "Nvme$subsystem",
00:20:46.095 "trtype": "$TEST_TRANSPORT",
00:20:46.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.095 "adrfam": "ipv4",
00:20:46.095 "trsvcid": "$NVMF_PORT",
00:20:46.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.095 "hdgst": ${hdgst:-false},
00:20:46.095 "ddgst": ${ddgst:-false}
00:20:46.095 },
00:20:46.095 "method": "bdev_nvme_attach_controller"
00:20:46.095 }
00:20:46.095 EOF
00:20:46.095 )")
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.095 {
00:20:46.095 "params": {
00:20:46.095 "name": "Nvme$subsystem",
00:20:46.095 "trtype": "$TEST_TRANSPORT",
00:20:46.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.095 "adrfam": "ipv4",
00:20:46.095 "trsvcid": "$NVMF_PORT",
00:20:46.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.095 "hdgst": ${hdgst:-false},
00:20:46.095 "ddgst": ${ddgst:-false}
00:20:46.095 },
00:20:46.095 "method": "bdev_nvme_attach_controller"
00:20:46.095 }
00:20:46.095 EOF
00:20:46.095 )")
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.095 {
00:20:46.095 "params": {
00:20:46.095 "name": "Nvme$subsystem",
00:20:46.095 "trtype": "$TEST_TRANSPORT",
00:20:46.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.095 "adrfam": "ipv4",
00:20:46.095 "trsvcid": "$NVMF_PORT",
00:20:46.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.095 "hdgst": ${hdgst:-false},
00:20:46.095 "ddgst": ${ddgst:-false}
00:20:46.095 },
00:20:46.095 "method": "bdev_nvme_attach_controller"
00:20:46.095 }
00:20:46.095 EOF
00:20:46.095 )")
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.095 {
00:20:46.095 "params": {
00:20:46.095 "name": "Nvme$subsystem",
00:20:46.095 "trtype": "$TEST_TRANSPORT",
00:20:46.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.095 "adrfam": "ipv4",
00:20:46.095 "trsvcid": "$NVMF_PORT",
00:20:46.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.095 "hdgst": ${hdgst:-false},
00:20:46.095 "ddgst": ${ddgst:-false}
00:20:46.095 },
00:20:46.095 "method": "bdev_nvme_attach_controller"
00:20:46.095 }
00:20:46.095 EOF
00:20:46.095 )")
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.095 {
00:20:46.095 "params": {
00:20:46.095 "name": "Nvme$subsystem",
00:20:46.095 "trtype": "$TEST_TRANSPORT",
00:20:46.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.095 "adrfam": "ipv4",
00:20:46.095 "trsvcid": "$NVMF_PORT",
00:20:46.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.095 "hdgst": ${hdgst:-false},
00:20:46.095 "ddgst": ${ddgst:-false}
00:20:46.095 },
00:20:46.095 "method": "bdev_nvme_attach_controller"
00:20:46.095 }
00:20:46.095 EOF
00:20:46.095 )")
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.095 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.095 {
00:20:46.095 "params": {
00:20:46.095 "name": "Nvme$subsystem",
00:20:46.095 "trtype": "$TEST_TRANSPORT",
00:20:46.095 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.095 "adrfam": "ipv4",
00:20:46.095 "trsvcid": "$NVMF_PORT",
00:20:46.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.096 "hdgst": ${hdgst:-false},
00:20:46.096 "ddgst": ${ddgst:-false}
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 }
00:20:46.096 EOF
00:20:46.096 )")
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.096 {
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme$subsystem",
00:20:46.096 "trtype": "$TEST_TRANSPORT",
00:20:46.096 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "$NVMF_PORT",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.096 "hdgst": ${hdgst:-false},
00:20:46.096 "ddgst": ${ddgst:-false}
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 }
00:20:46.096 EOF
00:20:46.096 )")
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:46.096 {
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme$subsystem",
00:20:46.096 "trtype": "$TEST_TRANSPORT",
00:20:46.096 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "$NVMF_PORT",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:46.096 "hdgst": ${hdgst:-false},
00:20:46.096 "ddgst": ${ddgst:-false}
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 }
00:20:46.096 EOF
00:20:46.096 )")
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:20:46.096 21:42:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme1",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst": false
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 },{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme2",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst": false
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 },{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme3",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst": false
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 },{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme4",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst": false
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 },{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme5",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst": false
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 },{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme6",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst": false
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 },{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme7",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst": false
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 },{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme8",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst": false
00:20:46.096 },
00:20:46.096 "method": "bdev_nvme_attach_controller"
00:20:46.096 },{
00:20:46.096 "params": {
00:20:46.096 "name": "Nvme9",
00:20:46.096 "trtype": "tcp",
00:20:46.096 "traddr": "10.0.0.2",
00:20:46.096 "adrfam": "ipv4",
00:20:46.096 "trsvcid": "4420",
00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:46.096 "hdgst": false,
00:20:46.096 "ddgst":
false 00:20:46.096 }, 00:20:46.096 "method": "bdev_nvme_attach_controller" 00:20:46.096 },{ 00:20:46.096 "params": { 00:20:46.096 "name": "Nvme10", 00:20:46.096 "trtype": "tcp", 00:20:46.096 "traddr": "10.0.0.2", 00:20:46.096 "adrfam": "ipv4", 00:20:46.096 "trsvcid": "4420", 00:20:46.096 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:46.096 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:46.096 "hdgst": false, 00:20:46.096 "ddgst": false 00:20:46.096 }, 00:20:46.096 "method": "bdev_nvme_attach_controller" 00:20:46.096 }' 00:20:46.096 [2024-07-15 21:42:36.735096] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:20:46.096 [2024-07-15 21:42:36.735198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390941 ] 00:20:46.096 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.096 [2024-07-15 21:42:36.792006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.355 [2024-07-15 21:42:36.891587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.725 Running I/O for 10 seconds... 
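The trace that follows exercises the `waitforio` helper from target/shutdown.sh: it repeatedly calls `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1`, extracts `num_read_ops` with `jq`, and breaks once the count reaches 100 (in this run it climbs 3, then 67, then 131). A minimal self-contained sketch of that polling loop, with the RPC-plus-jq pipeline replaced by a hypothetical mock counter so it runs standalone:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop from target/shutdown.sh.
# NOTE: read_io_count is driven by a mock increment here; the real helper
# obtains it via:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'
i=10              # poll at most 10 times (shutdown.sh@59)
ret=1             # failure until the I/O threshold is crossed
read_io_count=0
while (( i != 0 )); do
  read_io_count=$((read_io_count + 64))   # mock stand-in for the RPC + jq pipeline
  if [ "$read_io_count" -ge 100 ]; then   # shutdown.sh@63: enough reads observed
    ret=0
    break
  fi
  i=$((i - 1))
  # the real loop sleeps 0.25s between polls (shutdown.sh@67)
done
echo "ret=$ret read_io_count=$read_io_count"
```

With the mock increment the loop exits on the second poll; in the log below the third sample (131) is the first to satisfy `-ge 100`, after which the test kills the nvmf target process to exercise shutdown while I/O is in flight.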
00:20:47.982 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:47.982 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0
00:20:47.982 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:20:47.982 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:47.982 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:47.983 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:48.239 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:48.239 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:20:48.239 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:20:48.239 21:42:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67
00:20:48.496 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:20:48.497 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 390793
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 390793 ']'
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 390793
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 390793
00:20:48.773 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:20:48.773
21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:20:48.774 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 390793'
killing process with pid 390793
21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 390793
00:20:48.774 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 390793
00:20:48.774 [2024-07-15 21:42:39.404073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1915ed0 is same with the state(5) to be set
00:20:48.774 [2024-07-15 21:42:39.406705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8030 is same with the state(5) to be set
00:20:48.774 [2024-07-15 21:42:39.409363] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set
00:20:48.775 [2024-07-15 21:42:39.409856]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409868] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409915] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409928] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409940] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.409988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410001] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410049] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410074] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410087] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410112] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410136] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1916370 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.410609] 
nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:48.775 [2024-07-15 21:42:39.412148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412234] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412259] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412332] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412344] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412357] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412369] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412457] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412470] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with 
the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412627] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412640] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412652] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 
00:20:48.775 [2024-07-15 21:42:39.412676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412743] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412760] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 
21:42:39.412869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412886] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412941] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.412988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.413000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.413012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.413023] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.413035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.413047] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.413059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.413070] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f5f70 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.413630] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:48.775 [2024-07-15 21:42:39.414343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414380] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414395] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414419] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414431] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414539] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414562] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414574] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414616] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414634] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with 
the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414661] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414673] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414685] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414723] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414751] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414763] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414775] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414787] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 
00:20:48.775 [2024-07-15 21:42:39.414817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with [2024-07-15 21:42:39.414812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsthe state(5) to be set 00:20:48.775 id:0 cdw10:00000000 cdw11:00000000 00:20:48.775 [2024-07-15 21:42:39.414834] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.775 [2024-07-15 21:42:39.414846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with [2024-07-15 21:42:39.414859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:20:48.775 id:0 cdw10:00000000 cdw11:00000000 00:20:48.775 [2024-07-15 21:42:39.414874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.775 [2024-07-15 21:42:39.414872] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.775 [2024-07-15 21:42:39.414896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.775 [2024-07-15 21:42:39.414910] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.775 [2024-07-15 21:42:39.414923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.775 [2024-07-15 21:42:39.414928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.775 [2024-07-15 21:42:39.414935] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.414942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d490 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.414947] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.414959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.414971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.414982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.414999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with 
the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415024] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-15 21:42:39.415048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415080] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with [2024-07-15 21:42:39.415080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:20:48.776 id:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415160] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b120 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415198] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415224] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6430 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e34bf0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415417] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46c0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.776 [2024-07-15 21:42:39.415655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.776 [2024-07-15 21:42:39.415670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e08ad0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.415767] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:48.776 [2024-07-15 21:42:39.416901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.416932] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.416946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.416966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.416987] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.416999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 
00:20:48.776 [2024-07-15 21:42:39.417032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417060] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417145] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417206] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417295] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 
21:42:39.417320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417384] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417408] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417420] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417445] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417503] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417532] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417544] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417620] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417632] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417644] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417692] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417709] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417810] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417829] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417851] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f68d0 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.417929] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:48.776 [2024-07-15 21:42:39.418350] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:20:48.776 [2024-07-15 21:42:39.419680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419712] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419767] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419830] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419844] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419857] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419884] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 
21:42:39.419896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419926] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.776 [2024-07-15 21:42:39.419939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.419955] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f6d90 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.419954] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:48.777 [2024-07-15 21:42:39.420744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420797] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420822] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420839] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420851] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420888] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420910] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420936] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420960] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.420996] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is 
same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421012] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421037] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421053] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421150] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421163] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be 
set 00:20:48.777 [2024-07-15 21:42:39.421184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421221] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421248] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the 
state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421327] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421378] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421404] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421427] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421440] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421456] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421468] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421493] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421519] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421531] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 
00:20:48.777 [2024-07-15 21:42:39.421538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421543] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421555] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421568] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7230 is same with the state(5) to be set 00:20:48.777 [2024-07-15 21:42:39.421585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:48.777 [2024-07-15 21:42:39.421824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.421980] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.421995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 21:42:39.422468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777 [2024-07-15 21:42:39.422483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777 [2024-07-15 
21:42:39.422496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.777
[2024-07-15 21:42:39.422505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.777
[2024-07-15 21:42:39.422511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.777
[2024-07-15 21:42:39.422528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422532] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422545] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422559] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422610] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422635] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422668] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422778] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422790] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422804] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422830] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422849] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422861] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422876] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422901] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422938] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422956] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422969] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.422982] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.422987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.422994] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.423006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.423018] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.423046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.423058] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.423071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.423083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.423109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.423123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.423135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.423155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.778
[2024-07-15 21:42:39.423167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.778
[2024-07-15 21:42:39.423181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258eab0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423223] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423247] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423258] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778
[2024-07-15 21:42:39.423262] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x258eab0 was disconnected and freed. reset controller.
00:20:48.778 [2024-07-15 21:42:39.423270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.423282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.423294] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.423305] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.423321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.423336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f76d0 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424000] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424053] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424078] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 
21:42:39.424090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424114] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424126] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424144] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424157] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424180] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424204] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424221] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424253] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424296] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424308] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424321] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424346] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.778 [2024-07-15 21:42:39.424370] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424382] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424405] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424524] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424547] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424563] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424575] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424586] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424599] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424611] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424636] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424666] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424705] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424743] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424757] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7b70 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.424825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:48.779 [2024-07-15 21:42:39.424901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed6020 (9): Bad file descriptor 00:20:48.779 [2024-07-15 21:42:39.424935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8d490 (9): Bad file descriptor 00:20:48.779 [2024-07-15 21:42:39.424986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425053] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b960 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.425162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed4ca0 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.425322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d280 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.425463] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b120 (9): Bad file descriptor 00:20:48.779 [2024-07-15 21:42:39.425492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e34bf0 (9): Bad file descriptor 00:20:48.779 [2024-07-15 21:42:39.425519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd46c0 (9): Bad file descriptor 00:20:48.779 [2024-07-15 21:42:39.425547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e08ad0 (9): Bad file descriptor 00:20:48.779 [2024-07-15 21:42:39.425592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.425688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.779 [2024-07-15 21:42:39.425701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:48.779 [2024-07-15 21:42:39.425713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190e610 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.426737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:48.779 [2024-07-15 21:42:39.426767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed6020 with addr=10.0.0.2, port=4420 00:20:48.779 [2024-07-15 21:42:39.426784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6020 is same with the state(5) to be set 00:20:48.779 [2024-07-15 21:42:39.426873] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:48.779 [2024-07-15 21:42:39.426992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed6020 (9): Bad file descriptor 00:20:48.779 [2024-07-15 21:42:39.427105] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:48.779 [2024-07-15 21:42:39.427189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:48.779 [2024-07-15 21:42:39.427209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:48.779 [2024-07-15 21:42:39.427226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:48.779 [2024-07-15 21:42:39.427322] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:48.779 [2024-07-15 21:42:39.427352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:48.779 [2024-07-15 21:42:39.427416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:48.779 [2024-07-15 21:42:39.427928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.427973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.427989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428086] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.779 [2024-07-15 21:42:39.428418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.779 [2024-07-15 21:42:39.428431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 
21:42:39.428600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428761] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.428973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.428988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.429001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.429029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.429060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 
[2024-07-15 21:42:39.429090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.429119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.429156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.429193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.429222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.429251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.429268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.780 [2024-07-15 21:42:39.429282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.780 [2024-07-15 21:42:39.429299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.780 [2024-07-15 21:42:39.429312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.780 [2024-07-15 21:42:39.429327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f352c0 is same with the state(5) to be set
00:20:48.780 [2024-07-15 21:42:39.429422] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f352c0 was disconnected and freed. reset controller.
00:20:48.780 [2024-07-15 21:42:39.430713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:48.780 [2024-07-15 21:42:39.430767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed4ca0 (9): Bad file descriptor
00:20:48.780 [2024-07-15 21:42:39.431383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.780 [2024-07-15 21:42:39.431419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed4ca0 with addr=10.0.0.2, port=4420
00:20:48.780 [2024-07-15 21:42:39.431437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed4ca0 is same with the state(5) to be set
00:20:48.780 [2024-07-15 21:42:39.431521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed4ca0 (9): Bad file descriptor
00:20:48.780 [2024-07-15 21:42:39.431598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:48.780 [2024-07-15 21:42:39.431625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:48.780 [2024-07-15 21:42:39.431642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:48.780 [2024-07-15 21:42:39.431708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.780 [2024-07-15 21:42:39.434888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b960 (9): Bad file descriptor
00:20:48.780 [2024-07-15 21:42:39.434964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8d280 (9): Bad file descriptor
00:20:48.780 [2024-07-15 21:42:39.435021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190e610 (9): Bad file descriptor
00:20:48.780 [2024-07-15 21:42:39.435195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.780 [2024-07-15 21:42:39.435219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.780 [2024-07-15 21:42:39.435251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.780 [2024-07-15 21:42:39.435267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.780 [2024-07-15 21:42:39.435285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.780 [2024-07-15 21:42:39.435299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 
21:42:39.435487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435667] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.435973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.435989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 
[2024-07-15 21:42:39.436003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.780 [2024-07-15 21:42:39.436317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.780 [2024-07-15 21:42:39.436330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436674] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436838] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.436984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.436999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.437012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.437031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.437045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.437061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.437074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.437089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.437102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.437118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.437131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.437152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d290 is same with the state(5) to be set 00:20:48.781 [2024-07-15 21:42:39.438455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 
21:42:39.438481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438654] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.438966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.438982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 
[2024-07-15 21:42:39.438995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.439009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.439022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.439038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.439051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.439066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.439082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.439098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.439111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.439126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.781 [2024-07-15 21:42:39.439147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.781 [2024-07-15 21:42:39.439164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.781 [2024-07-15 21:42:39.439177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated *NOTICE* pairs, 00:20:48.781-00:20:48.782: READ sqid:1 nsid:1 (cid:24-63, lba:19456-24448 in steps of 128, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:48.782 [2024-07-15 21:42:39.440336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb7ad0 is same with the state(5) to be set
[repeated *NOTICE* pairs, 00:20:48.782-00:20:48.783: READ sqid:1 nsid:1 (cid:0-63, lba:24576-32640 in steps of 128, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:48.783 [2024-07-15 21:42:39.443536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8f60 is same with the state(5) to be set
[repeated *NOTICE* pairs, 00:20:48.783: READ sqid:1 nsid:1 (cid:5-11, lba:17024-17792 in steps of 128, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:48.783 [2024-07-15 21:42:39.445076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445582] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445740] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.445982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.445996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.446011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.446024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.446039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.446052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 
21:42:39.446067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.446080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.446099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.446114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.446130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.446150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.446168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.446182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.446197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.446210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.783 [2024-07-15 21:42:39.446225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.783 [2024-07-15 21:42:39.446238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:48.784 [2024-07-15 21:42:39.446576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.446732] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.446746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e02a30 is same with the state(5) to be set 00:20:48.784 [2024-07-15 21:42:39.448081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:48.784 [2024-07-15 21:42:39.448427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.784 [2024-07-15 21:42:39.448571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.784 [2024-07-15 21:42:39.448587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.784 [2024-07-15 21:42:39.448600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for READ sqid:1 cid:17 through cid:63, nsid:1, lba stepping by 128 from 18560 to 24448, len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 21:42:39.448615 through 21:42:39.449959 ...]
00:20:48.785 [2024-07-15 21:42:39.449973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36790 is same with the state(5) to be set
00:20:48.785 [2024-07-15 21:42:39.451856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:48.785 [2024-07-15 21:42:39.451901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:48.785 [2024-07-15 21:42:39.451927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:48.785 [2024-07-15 21:42:39.451944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:48.785 [2024-07-15 21:42:39.452060] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.785 [2024-07-15 21:42:39.452221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:48.785 [2024-07-15 21:42:39.452436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.785 [2024-07-15 21:42:39.452466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e08ad0 with addr=10.0.0.2, port=4420
00:20:48.785 [2024-07-15 21:42:39.452484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e08ad0 is same with the state(5) to be set
00:20:48.785 [2024-07-15 21:42:39.452575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.785 [2024-07-15 21:42:39.452598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd46c0 with addr=10.0.0.2, port=4420
00:20:48.785 [2024-07-15 21:42:39.452613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd46c0 is same with the state(5) to be set
00:20:48.785 [2024-07-15 21:42:39.452727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.785 [2024-07-15 21:42:39.452749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e34bf0 with addr=10.0.0.2, port=4420
00:20:48.785 [2024-07-15 21:42:39.452764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e34bf0 is same with the state(5) to be set
00:20:48.785 [2024-07-15 21:42:39.452842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.785 [2024-07-15 21:42:39.452864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2b120 with addr=10.0.0.2, port=4420
00:20:48.785 [2024-07-15 21:42:39.452879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b120 is same with the state(5) to be set
00:20:48.785 [2024-07-15 21:42:39.454011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.785 [2024-07-15 21:42:39.454037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same NOTICE pair repeats for READ sqid:1 cid:1 through cid:61, nsid:1, lba stepping by 128 from 24704 to 32384, len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, timestamps 21:42:39.454067 through 21:42:39.455825 ...]
00:20:48.786 [2024-07-15 21:42:39.455841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62
nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.455854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.455869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.455882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.455897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03ec0 is same with the state(5) to be set 00:20:48.786 [2024-07-15 21:42:39.457213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.786 [2024-07-15 21:42:39.457339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.457975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.457990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458157] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 
21:42:39.458489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.786 [2024-07-15 21:42:39.458892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.786 [2024-07-15 21:42:39.458905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.458920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.458933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.458948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.458961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:48.787 [2024-07-15 21:42:39.458976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.458989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.459005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.459018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.459033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.459047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.459062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.459076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.459090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2736490 is same with the state(5) to be set 00:20:48.787 [2024-07-15 21:42:39.460409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.460439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.460467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.460481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.460497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.460517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.460533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.460547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.460562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.460575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.460590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.460603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:48.787 [2024-07-15 21:42:39.460619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.787 [2024-07-15 21:42:39.460633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.787 [2024-07-15 21:42:39.460648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.787 [2024-07-15 21:42:39.460661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 55 further identical READ / "ABORTED - SQ DELETION (00/08)" notice pairs elided: cid:8 through cid:62, lba stepping by 128 from 25600 to 32512, len:128 each, timestamps 21:42:39.460676 through 21:42:39.462274 ...]
00:20:48.787 [2024-07-15 21:42:39.462289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63
nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.787 [2024-07-15 21:42:39.462302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:48.787 [2024-07-15 21:42:39.462316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28ddf30 is same with the state(5) to be set
00:20:48.787 [2024-07-15 21:42:39.464204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:48.787 [2024-07-15 21:42:39.464265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:48.787 [2024-07-15 21:42:39.464284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:48.787 [2024-07-15 21:42:39.464302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:48.787 task offset: 31872 on job bdev=Nvme6n1 fails
00:20:48.787
00:20:48.787 Latency(us)
00:20:48.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:48.787 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme1n1 ended in about 1.01 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme1n1 : 1.01 126.30 7.89 63.15 0.00 334259.90 37282.70 268746.15
00:20:48.788 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme2n1 ended in about 1.02 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme2n1 : 1.02 125.91 7.87 62.96 0.00 328919.67 21068.61 316902.97
00:20:48.788 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme3n1 ended in about 1.02 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme3n1 : 1.02 188.27 11.77 62.76
0.00 242564.55 20971.52 273406.48
00:20:48.788 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme4n1 ended in about 1.02 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme4n1 : 1.02 130.01 8.13 62.56 0.00 310144.84 21554.06 279620.27
00:20:48.788 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme5n1 ended in about 1.03 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme5n1 : 1.03 186.02 11.63 62.01 0.00 235990.47 20388.98 265639.25
00:20:48.788 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme6n1 ended in about 1.00 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme6n1 : 1.00 192.04 12.00 64.01 0.00 223079.35 4708.88 278066.82
00:20:48.788 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme7n1 ended in about 1.04 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme7n1 : 1.04 185.45 11.59 61.82 0.00 227290.83 19029.71 250104.79
00:20:48.788 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme8n1 ended in about 1.04 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme8n1 : 1.04 184.88 11.55 61.63 0.00 223371.95 18155.90 267192.70
00:20:48.788 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme9n1 ended in about 1.01 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme9n1 : 1.01 131.24 8.20 63.63 0.00 275045.63 22136.60 281173.71
00:20:48.788 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.788 Job: Nvme10n1 ended in about 1.03 seconds with error
00:20:48.788 Verification LBA range: start 0x0 length 0x400
00:20:48.788 Nvme10n1 : 1.03 124.73 7.80 62.36 0.00 281390.65 22330.79 274959.93
00:20:48.788 ===================================================================================================================
00:20:48.788 Total : 1574.85 98.43 626.88 0.00 262940.43 4708.88 316902.97
00:20:48.788 [2024-07-15 21:42:39.489196] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:48.788 [2024-07-15 21:42:39.489284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:48.788 [2024-07-15 21:42:39.489580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.788 [2024-07-15 21:42:39.489622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8d490 with addr=10.0.0.2, port=4420
00:20:48.788 [2024-07-15 21:42:39.489654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d490 is same with the state(5) to be set
00:20:48.788 [2024-07-15 21:42:39.489680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e08ad0 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.489702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd46c0 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.489718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e34bf0 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.489734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b120 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.489787] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.489809] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.489826] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.489842] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.489859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8d490 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.490184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.788 [2024-07-15 21:42:39.490223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed6020 with addr=10.0.0.2, port=4420
00:20:48.788 [2024-07-15 21:42:39.490239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6020 is same with the state(5) to be set
00:20:48.788 [2024-07-15 21:42:39.490332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.788 [2024-07-15 21:42:39.490354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed4ca0 with addr=10.0.0.2, port=4420
00:20:48.788 [2024-07-15 21:42:39.490368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed4ca0 is same with the state(5) to be set
00:20:48.788 [2024-07-15 21:42:39.490452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.788 [2024-07-15 21:42:39.490476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2b960 with addr=10.0.0.2, port=4420
00:20:48.788 [2024-07-15 21:42:39.490491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2b960 is same with the state(5) to be set
00:20:48.788 [2024-07-15 21:42:39.490587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.788 [2024-07-15 21:42:39.490609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190e610 with addr=10.0.0.2, port=4420
00:20:48.788 [2024-07-15 21:42:39.490624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190e610 is same with the state(5) to be set
00:20:48.788 [2024-07-15 21:42:39.490689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:48.788 [2024-07-15 21:42:39.490710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f8d280 with addr=10.0.0.2, port=4420
00:20:48.788 [2024-07-15 21:42:39.490724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d280 is same with the state(5) to be set
00:20:48.788 [2024-07-15 21:42:39.490741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.490754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.490769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:48.788 [2024-07-15 21:42:39.490794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.490808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.490821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:48.788 [2024-07-15 21:42:39.490839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.490852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.490864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:48.788 [2024-07-15 21:42:39.490881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.490893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.490906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:48.788 [2024-07-15 21:42:39.490937] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.490957] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.490974] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.490991] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.491007] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:48.788 [2024-07-15 21:42:39.491908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.491944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.491968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.491980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.492005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed6020 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.492030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed4ca0 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.492047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2b960 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.492064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190e610 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.492080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f8d280 (9): Bad file descriptor
00:20:48.788 [2024-07-15 21:42:39.492095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.492107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.492119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:48.788 [2024-07-15 21:42:39.492478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.492503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.492529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.492542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:48.788 [2024-07-15 21:42:39.492567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.492580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.492592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:48.788 [2024-07-15 21:42:39.492609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.492621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.492633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:48.788 [2024-07-15 21:42:39.492649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.492662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.492674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:48.788 [2024-07-15 21:42:39.492690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:20:48.788 [2024-07-15 21:42:39.492702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:20:48.788 [2024-07-15 21:42:39.492714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:20:48.788 [2024-07-15 21:42:39.492785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.492803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.492814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.492825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:48.788 [2024-07-15 21:42:39.492836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:49.063 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:20:49.063 21:42:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 390941
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (390941) - No such process
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:50.441 21:42:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:52.350 21:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:52.350
00:20:52.350 real 0m7.299s
00:20:52.350 user 0m17.629s
00:20:52.350 sys 0m1.358s
21:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:52.350 21:42:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:52.350 ************************************
00:20:52.350 END TEST nvmf_shutdown_tc3
00:20:52.350 ************************************
00:20:52.350 21:42:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- #
return 0 00:20:52.350 21:42:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:52.350 00:20:52.350 real 0m26.189s 00:20:52.350 user 1m13.415s 00:20:52.350 sys 0m5.683s 00:20:52.350 21:42:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:52.350 21:42:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:52.350 ************************************ 00:20:52.350 END TEST nvmf_shutdown 00:20:52.350 ************************************ 00:20:52.350 21:42:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:52.350 21:42:42 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:52.350 21:42:42 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:52.350 21:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:52.350 21:42:42 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:52.350 21:42:42 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:52.350 21:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:52.350 21:42:42 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:52.350 21:42:42 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:52.350 21:42:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:52.350 21:42:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:52.350 21:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:52.350 ************************************ 00:20:52.350 START TEST nvmf_multicontroller 00:20:52.350 ************************************ 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:52.350 * Looking for test storage... 
00:20:52.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.350 
21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:52.350 21:42:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:52.351 21:42:43 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:52.351 21:42:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.281 21:42:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:54.281 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:54.281 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.281 21:42:44 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:54.281 Found net devices under 0000:08:00.0: cvl_0_0 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:54.281 Found net devices under 0000:08:00.1: cvl_0_1 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:54.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:20:54.281 00:20:54.281 --- 10.0.0.2 ping statistics --- 00:20:54.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.281 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:20:54.281 00:20:54.281 --- 10.0.0.1 ping statistics --- 00:20:54.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.281 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=392809 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 392809 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 392809 ']' 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.281 21:42:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.281 [2024-07-15 21:42:44.951501] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:20:54.281 [2024-07-15 21:42:44.951601] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.281 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.281 [2024-07-15 21:42:45.017427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:54.538 [2024-07-15 21:42:45.137078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.538 [2024-07-15 21:42:45.137143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.538 [2024-07-15 21:42:45.137161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.538 [2024-07-15 21:42:45.137174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.538 [2024-07-15 21:42:45.137186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:54.538 [2024-07-15 21:42:45.137277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.538 [2024-07-15 21:42:45.141164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.538 [2024-07-15 21:42:45.141200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.538 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.538 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.539 [2024-07-15 21:42:45.285969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.539 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.796 Malloc0 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.796 [2024-07-15 21:42:45.352299] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.796 [2024-07-15 21:42:45.360215] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.796 Malloc1 00:20:54.796 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=392919 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 392919 /var/tmp/bdevperf.sock 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 392919 ']' 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.797 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.055 NVMe0n1 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.055 1 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.055 request: 00:20:55.055 { 00:20:55.055 "name": "NVMe0", 00:20:55.055 "trtype": "tcp", 00:20:55.055 "traddr": "10.0.0.2", 00:20:55.055 "adrfam": "ipv4", 00:20:55.055 "trsvcid": "4420", 00:20:55.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.055 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:55.055 "hostaddr": "10.0.0.2", 00:20:55.055 "hostsvcid": "60000", 00:20:55.055 "prchk_reftag": false, 00:20:55.055 "prchk_guard": false, 00:20:55.055 "hdgst": false, 00:20:55.055 "ddgst": false, 00:20:55.055 "method": "bdev_nvme_attach_controller", 00:20:55.055 "req_id": 1 00:20:55.055 } 00:20:55.055 Got JSON-RPC error response 00:20:55.055 response: 00:20:55.055 { 00:20:55.055 "code": -114, 00:20:55.055 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:55.055 } 00:20:55.055 
21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.055 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:20:55.055 request: 00:20:55.055 { 00:20:55.055 "name": "NVMe0", 00:20:55.055 "trtype": "tcp", 00:20:55.055 "traddr": "10.0.0.2", 00:20:55.055 "adrfam": "ipv4", 00:20:55.055 "trsvcid": "4420", 00:20:55.055 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:55.055 "hostaddr": "10.0.0.2", 00:20:55.055 "hostsvcid": "60000", 00:20:55.055 "prchk_reftag": false, 00:20:55.055 "prchk_guard": false, 00:20:55.055 "hdgst": false, 00:20:55.055 "ddgst": false, 00:20:55.055 "method": "bdev_nvme_attach_controller", 00:20:55.055 "req_id": 1 00:20:55.055 } 00:20:55.055 Got JSON-RPC error response 00:20:55.055 response: 00:20:55.055 { 00:20:55.055 "code": -114, 00:20:55.055 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:55.055 } 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 
-- # local arg=rpc_cmd 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.056 request: 00:20:55.056 { 00:20:55.056 "name": "NVMe0", 00:20:55.056 "trtype": "tcp", 00:20:55.056 "traddr": "10.0.0.2", 00:20:55.056 "adrfam": "ipv4", 00:20:55.056 "trsvcid": "4420", 00:20:55.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.056 "hostaddr": "10.0.0.2", 00:20:55.056 "hostsvcid": "60000", 00:20:55.056 "prchk_reftag": false, 00:20:55.056 "prchk_guard": false, 00:20:55.056 "hdgst": false, 00:20:55.056 "ddgst": false, 00:20:55.056 "multipath": "disable", 00:20:55.056 "method": "bdev_nvme_attach_controller", 00:20:55.056 "req_id": 1 00:20:55.056 } 00:20:55.056 Got JSON-RPC error response 00:20:55.056 response: 00:20:55.056 { 00:20:55.056 "code": -114, 00:20:55.056 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:55.056 } 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.056 request: 00:20:55.056 { 00:20:55.056 "name": "NVMe0", 00:20:55.056 "trtype": "tcp", 00:20:55.056 "traddr": "10.0.0.2", 00:20:55.056 "adrfam": "ipv4", 00:20:55.056 "trsvcid": "4420", 00:20:55.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.056 "hostaddr": "10.0.0.2", 00:20:55.056 
"hostsvcid": "60000", 00:20:55.056 "prchk_reftag": false, 00:20:55.056 "prchk_guard": false, 00:20:55.056 "hdgst": false, 00:20:55.056 "ddgst": false, 00:20:55.056 "multipath": "failover", 00:20:55.056 "method": "bdev_nvme_attach_controller", 00:20:55.056 "req_id": 1 00:20:55.056 } 00:20:55.056 Got JSON-RPC error response 00:20:55.056 response: 00:20:55.056 { 00:20:55.056 "code": -114, 00:20:55.056 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:55.056 } 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:55.056 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.313 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.313 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.313 21:42:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.313 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.313 21:42:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.313 00:20:55.313 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.313 21:42:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.313 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.313 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.313 21:42:46 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.313 21:42:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:55.313 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.313 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.570 00:20:55.570 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.570 21:42:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:55.570 21:42:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:55.570 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.570 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:55.570 21:42:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.570 21:42:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:55.570 21:42:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:56.938 0 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.938 
21:42:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 392919 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 392919 ']' 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 392919 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 392919 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 392919' 00:20:56.938 killing process with pid 392919 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 392919 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 392919 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:56.938 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:56.938 [2024-07-15 21:42:45.465244] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:20:56.938 [2024-07-15 21:42:45.465352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392919 ] 00:20:56.938 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.938 [2024-07-15 21:42:45.523623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.938 [2024-07-15 21:42:45.622944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.938 [2024-07-15 21:42:46.269587] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 2f8aa8f5-d43b-4a8b-a7f0-c7ef365ec0cf already exists 00:20:56.938 [2024-07-15 21:42:46.269623] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:2f8aa8f5-d43b-4a8b-a7f0-c7ef365ec0cf alias for bdev NVMe1n1 00:20:56.938 [2024-07-15 21:42:46.269637] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:56.938 Running I/O for 1 seconds... 
00:20:56.938 00:20:56.938 Latency(us) 00:20:56.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.938 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:56.938 NVMe0n1 : 1.01 18688.37 73.00 0.00 0.00 6838.24 5898.24 17185.00 00:20:56.938 =================================================================================================================== 00:20:56.938 Total : 18688.37 73.00 0.00 0.00 6838.24 5898.24 17185.00 00:20:56.938 Received shutdown signal, test time was about 1.000000 seconds 00:20:56.938 00:20:56.938 Latency(us) 00:20:56.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.938 =================================================================================================================== 00:20:56.938 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.938 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:56.938 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:56.938 rmmod nvme_tcp 00:20:56.938 rmmod nvme_fabrics 00:20:56.938 rmmod nvme_keyring 
00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 392809 ']' 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 392809 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 392809 ']' 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 392809 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 392809 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 392809' 00:20:57.196 killing process with pid 392809 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 392809 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 392809 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.196 21:42:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.726 21:42:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.726 00:20:59.726 real 0m7.010s 00:20:59.726 user 0m11.501s 00:20:59.726 sys 0m1.954s 00:20:59.726 21:42:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:59.726 21:42:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.726 ************************************ 00:20:59.726 END TEST nvmf_multicontroller 00:20:59.726 ************************************ 00:20:59.726 21:42:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:59.726 21:42:50 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:59.726 21:42:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:59.726 21:42:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.726 21:42:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:59.726 ************************************ 00:20:59.726 START TEST nvmf_aer 00:20:59.726 ************************************ 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:59.726 * Looking for test storage... 
00:20:59.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.726 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.727 21:42:50 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.118 21:42:51 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:01.118 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:01.118 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.118 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:01.119 Found net devices under 0000:08:00.0: cvl_0_0 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:01.119 Found net devices under 0000:08:00.1: cvl_0_1 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:21:01.119 00:21:01.119 --- 10.0.0.2 ping statistics --- 00:21:01.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.119 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:21:01.119 00:21:01.119 --- 10.0.0.1 ping statistics --- 00:21:01.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.119 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=394625 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 394625 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 394625 ']' 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.119 21:42:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.376 21:42:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.376 21:42:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.376 21:42:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.376 [2024-07-15 21:42:51.960163] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:21:01.376 [2024-07-15 21:42:51.960265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.376 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.376 [2024-07-15 21:42:52.026279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.377 [2024-07-15 21:42:52.147030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:01.377 [2024-07-15 21:42:52.147088] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.377 [2024-07-15 21:42:52.147104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.377 [2024-07-15 21:42:52.147117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.377 [2024-07-15 21:42:52.147129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.377 [2024-07-15 21:42:52.147220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.377 [2024-07-15 21:42:52.147311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.377 [2024-07-15 21:42:52.151157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.377 [2024-07-15 21:42:52.151175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.634 [2024-07-15 21:42:52.306912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.634 21:42:52 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.634 Malloc0 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.634 [2024-07-15 21:42:52.357205] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.634 [ 00:21:01.634 { 00:21:01.634 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:01.634 "subtype": "Discovery", 00:21:01.634 "listen_addresses": [], 00:21:01.634 "allow_any_host": true, 00:21:01.634 "hosts": [] 00:21:01.634 }, 00:21:01.634 { 00:21:01.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.634 "subtype": "NVMe", 00:21:01.634 "listen_addresses": [ 00:21:01.634 { 00:21:01.634 "trtype": "TCP", 00:21:01.634 "adrfam": "IPv4", 00:21:01.634 "traddr": "10.0.0.2", 00:21:01.634 "trsvcid": "4420" 00:21:01.634 } 00:21:01.634 ], 00:21:01.634 "allow_any_host": true, 00:21:01.634 "hosts": [], 00:21:01.634 "serial_number": "SPDK00000000000001", 00:21:01.634 "model_number": "SPDK bdev Controller", 00:21:01.634 "max_namespaces": 2, 00:21:01.634 "min_cntlid": 1, 00:21:01.634 "max_cntlid": 65519, 00:21:01.634 "namespaces": [ 00:21:01.634 { 00:21:01.634 "nsid": 1, 00:21:01.634 "bdev_name": "Malloc0", 00:21:01.634 "name": "Malloc0", 00:21:01.634 "nguid": "4DB1759C1AF2411B9554B45D054A6FE0", 00:21:01.634 "uuid": "4db1759c-1af2-411b-9554-b45d054a6fe0" 00:21:01.634 } 00:21:01.634 ] 00:21:01.634 } 00:21:01.634 ] 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=394652 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:01.634 21:42:52 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:01.634 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:01.892 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.892 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.150 Malloc1 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.150 [ 00:21:02.150 { 00:21:02.150 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:02.150 "subtype": "Discovery", 00:21:02.150 "listen_addresses": [], 00:21:02.150 "allow_any_host": true, 00:21:02.150 "hosts": [] 00:21:02.150 }, 00:21:02.150 { 00:21:02.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.150 "subtype": "NVMe", 00:21:02.150 "listen_addresses": [ 00:21:02.150 { 00:21:02.150 "trtype": "TCP", 00:21:02.150 "adrfam": "IPv4", 00:21:02.150 "traddr": "10.0.0.2", 00:21:02.150 "trsvcid": "4420" 00:21:02.150 } 00:21:02.150 ], 00:21:02.150 "allow_any_host": true, 00:21:02.150 "hosts": [], 00:21:02.150 "serial_number": "SPDK00000000000001", 00:21:02.150 "model_number": "SPDK bdev Controller", 00:21:02.150 "max_namespaces": 2, 00:21:02.150 "min_cntlid": 1, 00:21:02.150 "max_cntlid": 65519, 
00:21:02.150 "namespaces": [ 00:21:02.150 { 00:21:02.150 "nsid": 1, 00:21:02.150 "bdev_name": "Malloc0", 00:21:02.150 "name": "Malloc0", 00:21:02.150 "nguid": "4DB1759C1AF2411B9554B45D054A6FE0", 00:21:02.150 "uuid": "4db1759c-1af2-411b-9554-b45d054a6fe0" 00:21:02.150 }, 00:21:02.150 { 00:21:02.150 "nsid": 2, 00:21:02.150 "bdev_name": "Malloc1", 00:21:02.150 "name": "Malloc1", 00:21:02.150 "nguid": "DEBC97AC9F8749D99BDA264E25575ABC", 00:21:02.150 "uuid": "debc97ac-9f87-49d9-9bda-264e25575abc" 00:21:02.150 } 00:21:02.150 ] 00:21:02.150 } 00:21:02.150 ] 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 394652 00:21:02.150 Asynchronous Event Request test 00:21:02.150 Attaching to 10.0.0.2 00:21:02.150 Attached to 10.0.0.2 00:21:02.150 Registering asynchronous event callbacks... 00:21:02.150 Starting namespace attribute notice tests for all controllers... 00:21:02.150 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:02.150 aer_cb - Changed Namespace 00:21:02.150 Cleaning up... 
00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.150 rmmod nvme_tcp 00:21:02.150 rmmod nvme_fabrics 00:21:02.150 rmmod nvme_keyring 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer 
-- nvmf/common.sh@124 -- # set -e 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 394625 ']' 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 394625 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 394625 ']' 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 394625 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 394625 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 394625' 00:21:02.150 killing process with pid 394625 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 394625 00:21:02.150 21:42:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 394625 00:21:02.409 21:42:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.409 21:42:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:02.409 21:42:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:02.409 21:42:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.409 21:42:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.409 21:42:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.409 21:42:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.409 
21:42:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.938 21:42:55 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:04.938 00:21:04.938 real 0m5.070s 00:21:04.938 user 0m4.341s 00:21:04.938 sys 0m1.633s 00:21:04.938 21:42:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:04.938 21:42:55 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:04.938 ************************************ 00:21:04.938 END TEST nvmf_aer 00:21:04.938 ************************************ 00:21:04.938 21:42:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:04.938 21:42:55 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:04.938 21:42:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:04.938 21:42:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:04.938 21:42:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:04.938 ************************************ 00:21:04.938 START TEST nvmf_async_init 00:21:04.938 ************************************ 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:04.938 * Looking for test storage... 
00:21:04.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b3e70c5db0994f6e9ec95dd961a0948f 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:04.938 21:42:55 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.314 
21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.314 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:06.314 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:06.315 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:06.315 Found net devices under 0000:08:00.0: cvl_0_0 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:06.315 Found net devices under 0000:08:00.1: cvl_0_1 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.315 21:42:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:06.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:06.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:21:06.315 00:21:06.315 --- 10.0.0.2 ping statistics --- 00:21:06.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.315 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:21:06.315 00:21:06.315 --- 10.0.0.1 ping statistics --- 00:21:06.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.315 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:06.315 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=396157 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 396157 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 396157 ']' 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.572 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.572 [2024-07-15 21:42:57.179645] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:21:06.572 [2024-07-15 21:42:57.179732] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.572 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.572 [2024-07-15 21:42:57.243973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.572 [2024-07-15 21:42:57.359226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:06.572 [2024-07-15 21:42:57.359285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.572 [2024-07-15 21:42:57.359301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.572 [2024-07-15 21:42:57.359315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.572 [2024-07-15 21:42:57.359327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.572 [2024-07-15 21:42:57.359355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.830 [2024-07-15 21:42:57.495387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.830 null0 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b3e70c5db0994f6e9ec95dd961a0948f 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.830 [2024-07-15 21:42:57.535581] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.830 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.087 nvme0n1 00:21:07.087 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.087 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:07.087 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.087 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.087 [ 00:21:07.087 { 00:21:07.087 "name": "nvme0n1", 00:21:07.087 "aliases": [ 00:21:07.087 "b3e70c5d-b099-4f6e-9ec9-5dd961a0948f" 00:21:07.087 ], 00:21:07.087 "product_name": "NVMe disk", 00:21:07.087 "block_size": 512, 00:21:07.087 "num_blocks": 2097152, 00:21:07.087 "uuid": "b3e70c5d-b099-4f6e-9ec9-5dd961a0948f", 00:21:07.087 "assigned_rate_limits": { 00:21:07.087 "rw_ios_per_sec": 0, 00:21:07.087 "rw_mbytes_per_sec": 0, 00:21:07.087 "r_mbytes_per_sec": 0, 00:21:07.087 "w_mbytes_per_sec": 0 00:21:07.087 }, 00:21:07.087 "claimed": false, 00:21:07.087 "zoned": false, 00:21:07.087 "supported_io_types": { 00:21:07.087 "read": true, 00:21:07.087 "write": true, 00:21:07.087 "unmap": false, 00:21:07.087 "flush": true, 00:21:07.087 "reset": true, 00:21:07.087 "nvme_admin": true, 00:21:07.087 "nvme_io": true, 00:21:07.087 "nvme_io_md": false, 00:21:07.087 "write_zeroes": true, 00:21:07.087 "zcopy": false, 00:21:07.087 "get_zone_info": false, 00:21:07.087 "zone_management": false, 00:21:07.087 "zone_append": false, 00:21:07.087 "compare": 
true, 00:21:07.087 "compare_and_write": true, 00:21:07.087 "abort": true, 00:21:07.087 "seek_hole": false, 00:21:07.087 "seek_data": false, 00:21:07.087 "copy": true, 00:21:07.087 "nvme_iov_md": false 00:21:07.087 }, 00:21:07.087 "memory_domains": [ 00:21:07.087 { 00:21:07.087 "dma_device_id": "system", 00:21:07.087 "dma_device_type": 1 00:21:07.087 } 00:21:07.087 ], 00:21:07.087 "driver_specific": { 00:21:07.087 "nvme": [ 00:21:07.087 { 00:21:07.087 "trid": { 00:21:07.087 "trtype": "TCP", 00:21:07.087 "adrfam": "IPv4", 00:21:07.087 "traddr": "10.0.0.2", 00:21:07.087 "trsvcid": "4420", 00:21:07.087 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:07.087 }, 00:21:07.087 "ctrlr_data": { 00:21:07.087 "cntlid": 1, 00:21:07.087 "vendor_id": "0x8086", 00:21:07.087 "model_number": "SPDK bdev Controller", 00:21:07.087 "serial_number": "00000000000000000000", 00:21:07.087 "firmware_revision": "24.09", 00:21:07.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.087 "oacs": { 00:21:07.087 "security": 0, 00:21:07.087 "format": 0, 00:21:07.087 "firmware": 0, 00:21:07.087 "ns_manage": 0 00:21:07.087 }, 00:21:07.087 "multi_ctrlr": true, 00:21:07.087 "ana_reporting": false 00:21:07.087 }, 00:21:07.087 "vs": { 00:21:07.087 "nvme_version": "1.3" 00:21:07.087 }, 00:21:07.087 "ns_data": { 00:21:07.087 "id": 1, 00:21:07.087 "can_share": true 00:21:07.087 } 00:21:07.087 } 00:21:07.087 ], 00:21:07.087 "mp_policy": "active_passive" 00:21:07.087 } 00:21:07.087 } 00:21:07.087 ] 00:21:07.087 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.087 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:07.087 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.087 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.087 [2024-07-15 21:42:57.784108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:07.087 [2024-07-15 21:42:57.784198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1663d30 (9): Bad file descriptor 00:21:07.345 [2024-07-15 21:42:57.916268] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.345 [ 00:21:07.345 { 00:21:07.345 "name": "nvme0n1", 00:21:07.345 "aliases": [ 00:21:07.345 "b3e70c5d-b099-4f6e-9ec9-5dd961a0948f" 00:21:07.345 ], 00:21:07.345 "product_name": "NVMe disk", 00:21:07.345 "block_size": 512, 00:21:07.345 "num_blocks": 2097152, 00:21:07.345 "uuid": "b3e70c5d-b099-4f6e-9ec9-5dd961a0948f", 00:21:07.345 "assigned_rate_limits": { 00:21:07.345 "rw_ios_per_sec": 0, 00:21:07.345 "rw_mbytes_per_sec": 0, 00:21:07.345 "r_mbytes_per_sec": 0, 00:21:07.345 "w_mbytes_per_sec": 0 00:21:07.345 }, 00:21:07.345 "claimed": false, 00:21:07.345 "zoned": false, 00:21:07.345 "supported_io_types": { 00:21:07.345 "read": true, 00:21:07.345 "write": true, 00:21:07.345 "unmap": false, 00:21:07.345 "flush": true, 00:21:07.345 "reset": true, 00:21:07.345 "nvme_admin": true, 00:21:07.345 "nvme_io": true, 00:21:07.345 "nvme_io_md": false, 00:21:07.345 "write_zeroes": true, 00:21:07.345 "zcopy": false, 00:21:07.345 "get_zone_info": false, 00:21:07.345 "zone_management": false, 00:21:07.345 "zone_append": false, 00:21:07.345 "compare": true, 00:21:07.345 "compare_and_write": true, 00:21:07.345 "abort": true, 00:21:07.345 "seek_hole": false, 00:21:07.345 "seek_data": false, 00:21:07.345 "copy": true, 00:21:07.345 "nvme_iov_md": 
false 00:21:07.345 }, 00:21:07.345 "memory_domains": [ 00:21:07.345 { 00:21:07.345 "dma_device_id": "system", 00:21:07.345 "dma_device_type": 1 00:21:07.345 } 00:21:07.345 ], 00:21:07.345 "driver_specific": { 00:21:07.345 "nvme": [ 00:21:07.345 { 00:21:07.345 "trid": { 00:21:07.345 "trtype": "TCP", 00:21:07.345 "adrfam": "IPv4", 00:21:07.345 "traddr": "10.0.0.2", 00:21:07.345 "trsvcid": "4420", 00:21:07.345 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:07.345 }, 00:21:07.345 "ctrlr_data": { 00:21:07.345 "cntlid": 2, 00:21:07.345 "vendor_id": "0x8086", 00:21:07.345 "model_number": "SPDK bdev Controller", 00:21:07.345 "serial_number": "00000000000000000000", 00:21:07.345 "firmware_revision": "24.09", 00:21:07.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.345 "oacs": { 00:21:07.345 "security": 0, 00:21:07.345 "format": 0, 00:21:07.345 "firmware": 0, 00:21:07.345 "ns_manage": 0 00:21:07.345 }, 00:21:07.345 "multi_ctrlr": true, 00:21:07.345 "ana_reporting": false 00:21:07.345 }, 00:21:07.345 "vs": { 00:21:07.345 "nvme_version": "1.3" 00:21:07.345 }, 00:21:07.345 "ns_data": { 00:21:07.345 "id": 1, 00:21:07.345 "can_share": true 00:21:07.345 } 00:21:07.345 } 00:21:07.345 ], 00:21:07.345 "mp_policy": "active_passive" 00:21:07.345 } 00:21:07.345 } 00:21:07.345 ] 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NQdfvfmLRH 00:21:07.345 21:42:57 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NQdfvfmLRH 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.345 [2024-07-15 21:42:57.968757] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.345 [2024-07-15 21:42:57.968857] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NQdfvfmLRH 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.345 [2024-07-15 21:42:57.976756] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NQdfvfmLRH 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.345 21:42:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.345 [2024-07-15 21:42:57.984800] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.345 [2024-07-15 21:42:57.984849] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:07.345 nvme0n1 00:21:07.345 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.345 21:42:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:07.345 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.345 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.345 [ 00:21:07.345 { 00:21:07.345 "name": "nvme0n1", 00:21:07.345 "aliases": [ 00:21:07.345 "b3e70c5d-b099-4f6e-9ec9-5dd961a0948f" 00:21:07.345 ], 00:21:07.345 "product_name": "NVMe disk", 00:21:07.345 "block_size": 512, 00:21:07.345 "num_blocks": 2097152, 00:21:07.345 "uuid": "b3e70c5d-b099-4f6e-9ec9-5dd961a0948f", 00:21:07.345 "assigned_rate_limits": { 00:21:07.345 "rw_ios_per_sec": 0, 00:21:07.345 "rw_mbytes_per_sec": 0, 00:21:07.345 "r_mbytes_per_sec": 0, 00:21:07.345 "w_mbytes_per_sec": 0 00:21:07.345 }, 00:21:07.345 "claimed": false, 00:21:07.345 "zoned": false, 00:21:07.345 "supported_io_types": { 00:21:07.345 "read": true, 00:21:07.345 "write": true, 00:21:07.345 "unmap": false, 00:21:07.345 "flush": true, 00:21:07.345 "reset": true, 
00:21:07.345 "nvme_admin": true, 00:21:07.345 "nvme_io": true, 00:21:07.345 "nvme_io_md": false, 00:21:07.345 "write_zeroes": true, 00:21:07.345 "zcopy": false, 00:21:07.345 "get_zone_info": false, 00:21:07.345 "zone_management": false, 00:21:07.345 "zone_append": false, 00:21:07.345 "compare": true, 00:21:07.345 "compare_and_write": true, 00:21:07.345 "abort": true, 00:21:07.345 "seek_hole": false, 00:21:07.345 "seek_data": false, 00:21:07.345 "copy": true, 00:21:07.345 "nvme_iov_md": false 00:21:07.345 }, 00:21:07.345 "memory_domains": [ 00:21:07.345 { 00:21:07.345 "dma_device_id": "system", 00:21:07.345 "dma_device_type": 1 00:21:07.345 } 00:21:07.345 ], 00:21:07.345 "driver_specific": { 00:21:07.345 "nvme": [ 00:21:07.345 { 00:21:07.345 "trid": { 00:21:07.345 "trtype": "TCP", 00:21:07.345 "adrfam": "IPv4", 00:21:07.345 "traddr": "10.0.0.2", 00:21:07.345 "trsvcid": "4421", 00:21:07.345 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:07.345 }, 00:21:07.345 "ctrlr_data": { 00:21:07.345 "cntlid": 3, 00:21:07.345 "vendor_id": "0x8086", 00:21:07.345 "model_number": "SPDK bdev Controller", 00:21:07.345 "serial_number": "00000000000000000000", 00:21:07.345 "firmware_revision": "24.09", 00:21:07.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.345 "oacs": { 00:21:07.345 "security": 0, 00:21:07.345 "format": 0, 00:21:07.345 "firmware": 0, 00:21:07.345 "ns_manage": 0 00:21:07.345 }, 00:21:07.345 "multi_ctrlr": true, 00:21:07.345 "ana_reporting": false 00:21:07.345 }, 00:21:07.345 "vs": { 00:21:07.345 "nvme_version": "1.3" 00:21:07.345 }, 00:21:07.345 "ns_data": { 00:21:07.345 "id": 1, 00:21:07.345 "can_share": true 00:21:07.345 } 00:21:07.345 } 00:21:07.345 ], 00:21:07.345 "mp_policy": "active_passive" 00:21:07.345 } 00:21:07.345 } 00:21:07.345 ] 00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.NQdfvfmLRH 00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:07.346 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.399 rmmod nvme_tcp 00:21:07.399 rmmod nvme_fabrics 00:21:07.399 rmmod nvme_keyring 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 396157 ']' 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 396157 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 396157 ']' 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 396157 00:21:07.399 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 396157 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 396157' 00:21:07.657 killing process with pid 396157 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 396157 00:21:07.657 [2024-07-15 21:42:58.157478] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:07.657 [2024-07-15 21:42:58.157510] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 396157 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.657 21:42:58 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.190 21:43:00 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.190 00:21:10.190 real 0m5.178s 00:21:10.190 user 0m1.993s 00:21:10.190 sys 0m1.595s 00:21:10.190 21:43:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.190 21:43:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:10.190 ************************************ 00:21:10.190 END TEST nvmf_async_init 00:21:10.190 ************************************ 00:21:10.190 21:43:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:10.190 21:43:00 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:10.190 21:43:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:10.190 21:43:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.190 21:43:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.190 ************************************ 00:21:10.190 START TEST dma 00:21:10.190 ************************************ 00:21:10.190 21:43:00 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:10.190 * Looking for test storage... 
00:21:10.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.190 21:43:00 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.190 21:43:00 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:21:10.190 21:43:00 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.190 21:43:00 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.190 21:43:00 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.190 21:43:00 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.190 21:43:00 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:21:10.190 21:43:00 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:10.190 21:43:00 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.190 21:43:00 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.190 21:43:00 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:10.190 21:43:00 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:10.190 00:21:10.190 real 0m0.070s 00:21:10.190 user 0m0.041s 00:21:10.190 sys 0m0.034s 00:21:10.190 21:43:00 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.190 21:43:00 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:10.190 ************************************ 00:21:10.190 END TEST dma 00:21:10.190 ************************************ 00:21:10.190 21:43:00 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:21:10.190 21:43:00 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:10.190 21:43:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:10.190 21:43:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.190 21:43:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.190 ************************************ 00:21:10.190 START TEST nvmf_identify 00:21:10.190 ************************************ 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:10.190 * Looking for test storage... 00:21:10.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.190 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:21:10.191 21:43:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:11.566 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:11.566 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:11.566 Found net devices under 0000:08:00.0: cvl_0_0 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:11.566 Found net devices under 0000:08:00.1: cvl_0_1 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.566 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:11.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:11.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:11.824 00:21:11.824 --- 10.0.0.2 ping statistics --- 00:21:11.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.824 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:21:11.824 00:21:11.824 --- 10.0.0.1 ping statistics --- 00:21:11.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.824 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=397811 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify 
-- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 397811 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 397811 ']' 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.824 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:11.824 [2024-07-15 21:43:02.495090] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:21:11.824 [2024-07-15 21:43:02.495205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.824 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.824 [2024-07-15 21:43:02.562955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.081 [2024-07-15 21:43:02.684863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:12.081 [2024-07-15 21:43:02.684922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.081 [2024-07-15 21:43:02.684938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.081 [2024-07-15 21:43:02.684951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.081 [2024-07-15 21:43:02.684964] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.081 [2024-07-15 21:43:02.685031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.081 [2024-07-15 21:43:02.685095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.081 [2024-07-15 21:43:02.685155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.081 [2024-07-15 21:43:02.685159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.081 [2024-07-15 21:43:02.813768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.081 21:43:02 
nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.081 Malloc0 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.081 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.082 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:12.082 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.082 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.340 [2024-07-15 21:43:02.883693] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.340 21:43:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.340 [ 00:21:12.340 { 00:21:12.340 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.340 "subtype": "Discovery", 00:21:12.340 "listen_addresses": [ 00:21:12.340 { 00:21:12.340 "trtype": "TCP", 00:21:12.340 "adrfam": "IPv4", 00:21:12.340 "traddr": "10.0.0.2", 00:21:12.340 "trsvcid": "4420" 00:21:12.340 } 00:21:12.340 ], 00:21:12.340 "allow_any_host": true, 00:21:12.340 "hosts": [] 00:21:12.340 }, 00:21:12.340 { 00:21:12.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.340 "subtype": "NVMe", 00:21:12.340 "listen_addresses": [ 00:21:12.340 { 00:21:12.340 "trtype": "TCP", 00:21:12.340 "adrfam": "IPv4", 00:21:12.340 "traddr": "10.0.0.2", 00:21:12.340 "trsvcid": "4420" 00:21:12.340 } 00:21:12.340 ], 00:21:12.340 "allow_any_host": true, 00:21:12.340 "hosts": [], 00:21:12.340 "serial_number": "SPDK00000000000001", 00:21:12.340 "model_number": "SPDK bdev Controller", 00:21:12.340 "max_namespaces": 32, 00:21:12.340 "min_cntlid": 1, 00:21:12.340 "max_cntlid": 65519, 00:21:12.340 "namespaces": [ 00:21:12.340 { 00:21:12.340 "nsid": 1, 00:21:12.340 "bdev_name": "Malloc0", 00:21:12.340 "name": "Malloc0", 00:21:12.340 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:12.340 "eui64": "ABCDEF0123456789", 00:21:12.340 "uuid": "700b7d76-fff1-4c8c-940a-43bad639aa0f" 00:21:12.340 } 00:21:12.340 ] 00:21:12.340 } 00:21:12.340 ] 00:21:12.340 21:43:02 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.341 21:43:02 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:12.341 [2024-07-15 21:43:02.926022] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:21:12.341 [2024-07-15 21:43:02.926070] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397924 ] 00:21:12.341 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.341 [2024-07-15 21:43:02.968095] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:12.341 [2024-07-15 21:43:02.968181] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:12.341 [2024-07-15 21:43:02.968194] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:12.341 [2024-07-15 21:43:02.968213] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:12.341 [2024-07-15 21:43:02.968225] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:12.341 [2024-07-15 21:43:02.968479] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:12.341 [2024-07-15 21:43:02.968543] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7cb400 0 00:21:12.341 [2024-07-15 21:43:02.983160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:12.341 [2024-07-15 21:43:02.983184] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:12.341 [2024-07-15 21:43:02.983194] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:12.341 [2024-07-15 21:43:02.983201] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:12.341 [2024-07-15 21:43:02.983269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.983284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.983293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.341 [2024-07-15 21:43:02.983313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:12.341 [2024-07-15 21:43:02.983343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.341 [2024-07-15 21:43:02.988164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.341 [2024-07-15 21:43:02.988183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.341 [2024-07-15 21:43:02.988191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.341 [2024-07-15 21:43:02.988234] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:12.341 [2024-07-15 21:43:02.988247] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:12.341 [2024-07-15 21:43:02.988257] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:12.341 [2024-07-15 21:43:02.988282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.341 [2024-07-15 
21:43:02.988291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.341 [2024-07-15 21:43:02.988312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.341 [2024-07-15 21:43:02.988341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.341 [2024-07-15 21:43:02.988472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.341 [2024-07-15 21:43:02.988485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.341 [2024-07-15 21:43:02.988493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.341 [2024-07-15 21:43:02.988511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:12.341 [2024-07-15 21:43:02.988529] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:12.341 [2024-07-15 21:43:02.988543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.341 [2024-07-15 21:43:02.988571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.341 [2024-07-15 21:43:02.988595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 
00:21:12.341 [2024-07-15 21:43:02.988709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.341 [2024-07-15 21:43:02.988722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.341 [2024-07-15 21:43:02.988734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.341 [2024-07-15 21:43:02.988752] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:12.341 [2024-07-15 21:43:02.988772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:12.341 [2024-07-15 21:43:02.988785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.341 [2024-07-15 21:43:02.988813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.341 [2024-07-15 21:43:02.988838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.341 [2024-07-15 21:43:02.988966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.341 [2024-07-15 21:43:02.988980] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.341 [2024-07-15 21:43:02.988988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.988996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.341 [2024-07-15 21:43:02.989007] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:12.341 [2024-07-15 21:43:02.989029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989046] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.341 [2024-07-15 21:43:02.989058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.341 [2024-07-15 21:43:02.989083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.341 [2024-07-15 21:43:02.989213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.341 [2024-07-15 21:43:02.989229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.341 [2024-07-15 21:43:02.989236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.341 [2024-07-15 21:43:02.989255] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:12.341 [2024-07-15 21:43:02.989265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:12.341 [2024-07-15 21:43:02.989288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:12.341 [2024-07-15 21:43:02.989399] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 
00:21:12.341 [2024-07-15 21:43:02.989409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:12.341 [2024-07-15 21:43:02.989426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.341 [2024-07-15 21:43:02.989453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.341 [2024-07-15 21:43:02.989476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.341 [2024-07-15 21:43:02.989602] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.341 [2024-07-15 21:43:02.989615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.341 [2024-07-15 21:43:02.989623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.341 [2024-07-15 21:43:02.989641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:12.341 [2024-07-15 21:43:02.989658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.341 [2024-07-15 21:43:02.989687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.341 [2024-07-15 21:43:02.989709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.341 [2024-07-15 21:43:02.989822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.341 [2024-07-15 21:43:02.989835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.341 [2024-07-15 21:43:02.989843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.341 [2024-07-15 21:43:02.989860] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:12.341 [2024-07-15 21:43:02.989870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:12.341 [2024-07-15 21:43:02.989885] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:12.341 [2024-07-15 21:43:02.989907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:12.341 [2024-07-15 21:43:02.989926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.341 [2024-07-15 21:43:02.989934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.341 [2024-07-15 21:43:02.989947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.341 [2024-07-15 21:43:02.989969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.341 [2024-07-15 
21:43:02.990116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.341 [2024-07-15 21:43:02.990129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.341 [2024-07-15 21:43:02.994149] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:02.994172] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb400): datao=0, datal=4096, cccid=0 00:21:12.342 [2024-07-15 21:43:02.994182] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82b3c0) on tqpair(0x7cb400): expected_datao=0, payload_size=4096 00:21:12.342 [2024-07-15 21:43:02.994192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:02.994214] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:02.994225] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.342 [2024-07-15 21:43:03.030297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.342 [2024-07-15 21:43:03.030306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.342 [2024-07-15 21:43:03.030333] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:12.342 [2024-07-15 21:43:03.030349] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:12.342 [2024-07-15 21:43:03.030359] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:12.342 [2024-07-15 21:43:03.030369] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:12.342 [2024-07-15 21:43:03.030379] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:12.342 [2024-07-15 21:43:03.030388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:12.342 [2024-07-15 21:43:03.030406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:12.342 [2024-07-15 21:43:03.030421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.030450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.342 [2024-07-15 21:43:03.030474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.342 [2024-07-15 21:43:03.030594] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.342 [2024-07-15 21:43:03.030607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.342 [2024-07-15 21:43:03.030615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.342 [2024-07-15 21:43:03.030639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030655] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.030666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.342 [2024-07-15 21:43:03.030677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.030703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.342 [2024-07-15 21:43:03.030714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.030739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.342 [2024-07-15 21:43:03.030750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.030775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.342 [2024-07-15 21:43:03.030785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive 
timeout (timeout 30000 ms) 00:21:12.342 [2024-07-15 21:43:03.030811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:12.342 [2024-07-15 21:43:03.030825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.030834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.030845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.342 [2024-07-15 21:43:03.030870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b3c0, cid 0, qid 0 00:21:12.342 [2024-07-15 21:43:03.030882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b540, cid 1, qid 0 00:21:12.342 [2024-07-15 21:43:03.030891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b6c0, cid 2, qid 0 00:21:12.342 [2024-07-15 21:43:03.030900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.342 [2024-07-15 21:43:03.030909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b9c0, cid 4, qid 0 00:21:12.342 [2024-07-15 21:43:03.031059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.342 [2024-07-15 21:43:03.031071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.342 [2024-07-15 21:43:03.031079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b9c0) on tqpair=0x7cb400 00:21:12.342 [2024-07-15 21:43:03.031100] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:12.342 [2024-07-15 
21:43:03.031111] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:12.342 [2024-07-15 21:43:03.031130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.031161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.342 [2024-07-15 21:43:03.031184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b9c0, cid 4, qid 0 00:21:12.342 [2024-07-15 21:43:03.031323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.342 [2024-07-15 21:43:03.031336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.342 [2024-07-15 21:43:03.031344] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031351] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb400): datao=0, datal=4096, cccid=4 00:21:12.342 [2024-07-15 21:43:03.031360] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82b9c0) on tqpair(0x7cb400): expected_datao=0, payload_size=4096 00:21:12.342 [2024-07-15 21:43:03.031369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031386] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031395] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.342 [2024-07-15 21:43:03.031419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.342 [2024-07-15 21:43:03.031427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b9c0) on tqpair=0x7cb400 00:21:12.342 [2024-07-15 21:43:03.031456] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:12.342 [2024-07-15 21:43:03.031499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031510] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.031526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.342 [2024-07-15 21:43:03.031540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.031566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.342 [2024-07-15 21:43:03.031594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b9c0, cid 4, qid 0 00:21:12.342 [2024-07-15 21:43:03.031607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82bb40, cid 5, qid 0 00:21:12.342 [2024-07-15 21:43:03.031765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.342 [2024-07-15 21:43:03.031780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.342 [2024-07-15 21:43:03.031788] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031796] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb400): 
datao=0, datal=1024, cccid=4 00:21:12.342 [2024-07-15 21:43:03.031804] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82b9c0) on tqpair(0x7cb400): expected_datao=0, payload_size=1024 00:21:12.342 [2024-07-15 21:43:03.031813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031824] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031832] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.342 [2024-07-15 21:43:03.031853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.342 [2024-07-15 21:43:03.031860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.031868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82bb40) on tqpair=0x7cb400 00:21:12.342 [2024-07-15 21:43:03.076157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.342 [2024-07-15 21:43:03.076176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.342 [2024-07-15 21:43:03.076184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.076193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b9c0) on tqpair=0x7cb400 00:21:12.342 [2024-07-15 21:43:03.076225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.076235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb400) 00:21:12.342 [2024-07-15 21:43:03.076248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.342 [2024-07-15 21:43:03.076281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x82b9c0, cid 4, qid 0 00:21:12.342 [2024-07-15 21:43:03.076430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.342 [2024-07-15 21:43:03.076443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.342 [2024-07-15 21:43:03.076451] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.342 [2024-07-15 21:43:03.076459] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb400): datao=0, datal=3072, cccid=4 00:21:12.342 [2024-07-15 21:43:03.076468] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82b9c0) on tqpair(0x7cb400): expected_datao=0, payload_size=3072 00:21:12.342 [2024-07-15 21:43:03.076477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.343 [2024-07-15 21:43:03.076498] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.343 [2024-07-15 21:43:03.076508] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.343 [2024-07-15 21:43:03.117251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.343 [2024-07-15 21:43:03.117271] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.343 [2024-07-15 21:43:03.117278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.343 [2024-07-15 21:43:03.117285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b9c0) on tqpair=0x7cb400 00:21:12.343 [2024-07-15 21:43:03.117314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.343 [2024-07-15 21:43:03.117322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7cb400) 00:21:12.343 [2024-07-15 21:43:03.117333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.343 [2024-07-15 21:43:03.117372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x82b9c0, cid 4, qid 0 00:21:12.343 [2024-07-15 21:43:03.117535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.343 [2024-07-15 21:43:03.117546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.343 [2024-07-15 21:43:03.117553] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.343 [2024-07-15 21:43:03.117559] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7cb400): datao=0, datal=8, cccid=4 00:21:12.343 [2024-07-15 21:43:03.117567] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x82b9c0) on tqpair(0x7cb400): expected_datao=0, payload_size=8 00:21:12.343 [2024-07-15 21:43:03.117574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.343 [2024-07-15 21:43:03.117584] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.343 [2024-07-15 21:43:03.117590] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.622 [2024-07-15 21:43:03.158242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.622 [2024-07-15 21:43:03.158258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.622 [2024-07-15 21:43:03.158265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.622 [2024-07-15 21:43:03.158272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b9c0) on tqpair=0x7cb400 00:21:12.622 ===================================================== 00:21:12.622 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:12.622 ===================================================== 00:21:12.622 Controller Capabilities/Features 00:21:12.622 ================================ 00:21:12.622 Vendor ID: 0000 00:21:12.622 Subsystem Vendor ID: 0000 00:21:12.622 Serial Number: .................... 00:21:12.622 Model Number: ........................................ 
00:21:12.622 Firmware Version: 24.09 00:21:12.622 Recommended Arb Burst: 0 00:21:12.622 IEEE OUI Identifier: 00 00 00 00:21:12.622 Multi-path I/O 00:21:12.622 May have multiple subsystem ports: No 00:21:12.622 May have multiple controllers: No 00:21:12.622 Associated with SR-IOV VF: No 00:21:12.622 Max Data Transfer Size: 131072 00:21:12.622 Max Number of Namespaces: 0 00:21:12.622 Max Number of I/O Queues: 1024 00:21:12.622 NVMe Specification Version (VS): 1.3 00:21:12.622 NVMe Specification Version (Identify): 1.3 00:21:12.622 Maximum Queue Entries: 128 00:21:12.622 Contiguous Queues Required: Yes 00:21:12.622 Arbitration Mechanisms Supported 00:21:12.622 Weighted Round Robin: Not Supported 00:21:12.622 Vendor Specific: Not Supported 00:21:12.622 Reset Timeout: 15000 ms 00:21:12.622 Doorbell Stride: 4 bytes 00:21:12.622 NVM Subsystem Reset: Not Supported 00:21:12.622 Command Sets Supported 00:21:12.622 NVM Command Set: Supported 00:21:12.622 Boot Partition: Not Supported 00:21:12.622 Memory Page Size Minimum: 4096 bytes 00:21:12.622 Memory Page Size Maximum: 4096 bytes 00:21:12.622 Persistent Memory Region: Not Supported 00:21:12.622 Optional Asynchronous Events Supported 00:21:12.622 Namespace Attribute Notices: Not Supported 00:21:12.622 Firmware Activation Notices: Not Supported 00:21:12.622 ANA Change Notices: Not Supported 00:21:12.622 PLE Aggregate Log Change Notices: Not Supported 00:21:12.622 LBA Status Info Alert Notices: Not Supported 00:21:12.622 EGE Aggregate Log Change Notices: Not Supported 00:21:12.622 Normal NVM Subsystem Shutdown event: Not Supported 00:21:12.622 Zone Descriptor Change Notices: Not Supported 00:21:12.622 Discovery Log Change Notices: Supported 00:21:12.622 Controller Attributes 00:21:12.622 128-bit Host Identifier: Not Supported 00:21:12.622 Non-Operational Permissive Mode: Not Supported 00:21:12.622 NVM Sets: Not Supported 00:21:12.622 Read Recovery Levels: Not Supported 00:21:12.622 Endurance Groups: Not Supported 00:21:12.622 
Predictable Latency Mode: Not Supported 00:21:12.622 Traffic Based Keep ALive: Not Supported 00:21:12.622 Namespace Granularity: Not Supported 00:21:12.622 SQ Associations: Not Supported 00:21:12.622 UUID List: Not Supported 00:21:12.622 Multi-Domain Subsystem: Not Supported 00:21:12.622 Fixed Capacity Management: Not Supported 00:21:12.622 Variable Capacity Management: Not Supported 00:21:12.622 Delete Endurance Group: Not Supported 00:21:12.622 Delete NVM Set: Not Supported 00:21:12.622 Extended LBA Formats Supported: Not Supported 00:21:12.622 Flexible Data Placement Supported: Not Supported 00:21:12.622 00:21:12.622 Controller Memory Buffer Support 00:21:12.622 ================================ 00:21:12.622 Supported: No 00:21:12.622 00:21:12.622 Persistent Memory Region Support 00:21:12.622 ================================ 00:21:12.622 Supported: No 00:21:12.622 00:21:12.622 Admin Command Set Attributes 00:21:12.622 ============================ 00:21:12.622 Security Send/Receive: Not Supported 00:21:12.622 Format NVM: Not Supported 00:21:12.622 Firmware Activate/Download: Not Supported 00:21:12.622 Namespace Management: Not Supported 00:21:12.622 Device Self-Test: Not Supported 00:21:12.622 Directives: Not Supported 00:21:12.622 NVMe-MI: Not Supported 00:21:12.622 Virtualization Management: Not Supported 00:21:12.622 Doorbell Buffer Config: Not Supported 00:21:12.622 Get LBA Status Capability: Not Supported 00:21:12.622 Command & Feature Lockdown Capability: Not Supported 00:21:12.622 Abort Command Limit: 1 00:21:12.622 Async Event Request Limit: 4 00:21:12.622 Number of Firmware Slots: N/A 00:21:12.622 Firmware Slot 1 Read-Only: N/A 00:21:12.622 Firmware Activation Without Reset: N/A 00:21:12.622 Multiple Update Detection Support: N/A 00:21:12.622 Firmware Update Granularity: No Information Provided 00:21:12.622 Per-Namespace SMART Log: No 00:21:12.622 Asymmetric Namespace Access Log Page: Not Supported 00:21:12.622 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:21:12.622 Command Effects Log Page: Not Supported 00:21:12.622 Get Log Page Extended Data: Supported 00:21:12.622 Telemetry Log Pages: Not Supported 00:21:12.622 Persistent Event Log Pages: Not Supported 00:21:12.622 Supported Log Pages Log Page: May Support 00:21:12.622 Commands Supported & Effects Log Page: Not Supported 00:21:12.622 Feature Identifiers & Effects Log Page:May Support 00:21:12.622 NVMe-MI Commands & Effects Log Page: May Support 00:21:12.622 Data Area 4 for Telemetry Log: Not Supported 00:21:12.622 Error Log Page Entries Supported: 128 00:21:12.622 Keep Alive: Not Supported 00:21:12.622 00:21:12.622 NVM Command Set Attributes 00:21:12.622 ========================== 00:21:12.622 Submission Queue Entry Size 00:21:12.622 Max: 1 00:21:12.622 Min: 1 00:21:12.622 Completion Queue Entry Size 00:21:12.622 Max: 1 00:21:12.622 Min: 1 00:21:12.622 Number of Namespaces: 0 00:21:12.622 Compare Command: Not Supported 00:21:12.622 Write Uncorrectable Command: Not Supported 00:21:12.622 Dataset Management Command: Not Supported 00:21:12.622 Write Zeroes Command: Not Supported 00:21:12.622 Set Features Save Field: Not Supported 00:21:12.622 Reservations: Not Supported 00:21:12.622 Timestamp: Not Supported 00:21:12.622 Copy: Not Supported 00:21:12.622 Volatile Write Cache: Not Present 00:21:12.622 Atomic Write Unit (Normal): 1 00:21:12.622 Atomic Write Unit (PFail): 1 00:21:12.622 Atomic Compare & Write Unit: 1 00:21:12.622 Fused Compare & Write: Supported 00:21:12.622 Scatter-Gather List 00:21:12.622 SGL Command Set: Supported 00:21:12.622 SGL Keyed: Supported 00:21:12.622 SGL Bit Bucket Descriptor: Not Supported 00:21:12.622 SGL Metadata Pointer: Not Supported 00:21:12.622 Oversized SGL: Not Supported 00:21:12.622 SGL Metadata Address: Not Supported 00:21:12.622 SGL Offset: Supported 00:21:12.622 Transport SGL Data Block: Not Supported 00:21:12.622 Replay Protected Memory Block: Not Supported 00:21:12.622 00:21:12.622 
Firmware Slot Information 00:21:12.622 ========================= 00:21:12.622 Active slot: 0 00:21:12.622 00:21:12.622 00:21:12.622 Error Log 00:21:12.622 ========= 00:21:12.622 00:21:12.622 Active Namespaces 00:21:12.622 ================= 00:21:12.622 Discovery Log Page 00:21:12.622 ================== 00:21:12.622 Generation Counter: 2 00:21:12.622 Number of Records: 2 00:21:12.622 Record Format: 0 00:21:12.622 00:21:12.622 Discovery Log Entry 0 00:21:12.622 ---------------------- 00:21:12.622 Transport Type: 3 (TCP) 00:21:12.622 Address Family: 1 (IPv4) 00:21:12.622 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:12.622 Entry Flags: 00:21:12.622 Duplicate Returned Information: 1 00:21:12.622 Explicit Persistent Connection Support for Discovery: 1 00:21:12.622 Transport Requirements: 00:21:12.622 Secure Channel: Not Required 00:21:12.622 Port ID: 0 (0x0000) 00:21:12.622 Controller ID: 65535 (0xffff) 00:21:12.622 Admin Max SQ Size: 128 00:21:12.622 Transport Service Identifier: 4420 00:21:12.622 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:12.622 Transport Address: 10.0.0.2 00:21:12.622 Discovery Log Entry 1 00:21:12.622 ---------------------- 00:21:12.622 Transport Type: 3 (TCP) 00:21:12.622 Address Family: 1 (IPv4) 00:21:12.622 Subsystem Type: 2 (NVM Subsystem) 00:21:12.622 Entry Flags: 00:21:12.622 Duplicate Returned Information: 0 00:21:12.622 Explicit Persistent Connection Support for Discovery: 0 00:21:12.622 Transport Requirements: 00:21:12.622 Secure Channel: Not Required 00:21:12.622 Port ID: 0 (0x0000) 00:21:12.622 Controller ID: 65535 (0xffff) 00:21:12.622 Admin Max SQ Size: 128 00:21:12.622 Transport Service Identifier: 4420 00:21:12.622 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:12.622 Transport Address: 10.0.0.2 [2024-07-15 21:43:03.158415] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:12.622 [2024-07-15 21:43:03.158436] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b3c0) on tqpair=0x7cb400 00:21:12.622 [2024-07-15 21:43:03.158448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.622 [2024-07-15 21:43:03.158457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b540) on tqpair=0x7cb400 00:21:12.622 [2024-07-15 21:43:03.158465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.622 [2024-07-15 21:43:03.158473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b6c0) on tqpair=0x7cb400 00:21:12.622 [2024-07-15 21:43:03.158480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.622 [2024-07-15 21:43:03.158488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.158496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.623 [2024-07-15 21:43:03.158515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.158523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.158530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.158540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.158564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 21:43:03.158684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.158700] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.158707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.158714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.158726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.158733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.158740] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.158750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.158774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 21:43:03.158914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.158926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.158933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.158940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.158949] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:12.623 [2024-07-15 21:43:03.158958] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:12.623 [2024-07-15 21:43:03.158973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.158981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.158988] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.158998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.159016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 21:43:03.159144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.159157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.159164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.159187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.159212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.159230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 21:43:03.159353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.159365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.159372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 
21:43:03.159394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.159419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.159440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 21:43:03.159556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.159567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.159573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.159595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.159620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.159638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 21:43:03.159759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.159770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.159777] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.159799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.159823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.159841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 21:43:03.159960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.159971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.159977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.159984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.159999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.160008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.160014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.160024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.160042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 
21:43:03.164153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.164169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.164176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.164183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.164200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.164209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.164215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7cb400) 00:21:12.623 [2024-07-15 21:43:03.164226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.164246] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x82b840, cid 3, qid 0 00:21:12.623 [2024-07-15 21:43:03.164343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.164359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.164366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.164373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x82b840) on tqpair=0x7cb400 00:21:12.623 [2024-07-15 21:43:03.164386] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:21:12.623 00:21:12.623 21:43:03 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:12.623 
[2024-07-15 21:43:03.199965] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:21:12.623 [2024-07-15 21:43:03.200012] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397926 ] 00:21:12.623 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.623 [2024-07-15 21:43:03.234794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:12.623 [2024-07-15 21:43:03.234851] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:12.623 [2024-07-15 21:43:03.234861] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:12.623 [2024-07-15 21:43:03.234876] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:12.623 [2024-07-15 21:43:03.234885] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:12.623 [2024-07-15 21:43:03.238194] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:12.623 [2024-07-15 21:43:03.238233] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b02400 0 00:21:12.623 [2024-07-15 21:43:03.245155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:12.623 [2024-07-15 21:43:03.245173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:12.623 [2024-07-15 21:43:03.245181] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:12.623 [2024-07-15 21:43:03.245187] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:12.623 [2024-07-15 21:43:03.245229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:12.623 [2024-07-15 21:43:03.245240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.245247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b02400) 00:21:12.623 [2024-07-15 21:43:03.245261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:12.623 [2024-07-15 21:43:03.245285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.623 [2024-07-15 21:43:03.253160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.253175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.253182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.623 [2024-07-15 21:43:03.253207] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:12.623 [2024-07-15 21:43:03.253218] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:12.623 [2024-07-15 21:43:03.253227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:12.623 [2024-07-15 21:43:03.253248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b02400) 00:21:12.623 [2024-07-15 21:43:03.253274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 
21:43:03.253296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.623 [2024-07-15 21:43:03.253387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.253398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.253404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.623 [2024-07-15 21:43:03.253419] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:12.623 [2024-07-15 21:43:03.253431] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:12.623 [2024-07-15 21:43:03.253442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b02400) 00:21:12.623 [2024-07-15 21:43:03.253465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.253484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.623 [2024-07-15 21:43:03.253560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.253571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.253577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on 
tqpair=0x1b02400 00:21:12.623 [2024-07-15 21:43:03.253592] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:12.623 [2024-07-15 21:43:03.253606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:12.623 [2024-07-15 21:43:03.253617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b02400) 00:21:12.623 [2024-07-15 21:43:03.253641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.623 [2024-07-15 21:43:03.253659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.623 [2024-07-15 21:43:03.253741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.623 [2024-07-15 21:43:03.253753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.623 [2024-07-15 21:43:03.253760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.623 [2024-07-15 21:43:03.253775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:12.623 [2024-07-15 21:43:03.253790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.623 [2024-07-15 21:43:03.253798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.253805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.253818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.624 [2024-07-15 21:43:03.253838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.624 [2024-07-15 21:43:03.253914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.624 [2024-07-15 21:43:03.253926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.624 [2024-07-15 21:43:03.253933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.253939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.624 [2024-07-15 21:43:03.253950] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:12.624 [2024-07-15 21:43:03.253958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:12.624 [2024-07-15 21:43:03.253970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:12.624 [2024-07-15 21:43:03.254080] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:12.624 [2024-07-15 21:43:03.254087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:12.624 [2024-07-15 21:43:03.254099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254106] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.254123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.624 [2024-07-15 21:43:03.254148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.624 [2024-07-15 21:43:03.254240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.624 [2024-07-15 21:43:03.254252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.624 [2024-07-15 21:43:03.254258] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.624 [2024-07-15 21:43:03.254273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:12.624 [2024-07-15 21:43:03.254288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.254313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.624 [2024-07-15 21:43:03.254331] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.624 [2024-07-15 21:43:03.254407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.624 [2024-07-15 21:43:03.254419] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.624 [2024-07-15 21:43:03.254426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254433] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.624 [2024-07-15 21:43:03.254440] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:12.624 [2024-07-15 21:43:03.254448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.254465] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:12.624 [2024-07-15 21:43:03.254478] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.254492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.254509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.624 [2024-07-15 21:43:03.254528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.624 [2024-07-15 21:43:03.254646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.624 [2024-07-15 21:43:03.254657] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.624 [2024-07-15 21:43:03.254663] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254670] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b02400): datao=0, datal=4096, cccid=0 00:21:12.624 [2024-07-15 21:43:03.254683] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1b623c0) on tqpair(0x1b02400): expected_datao=0, payload_size=4096 00:21:12.624 [2024-07-15 21:43:03.254692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254709] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254717] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.624 [2024-07-15 21:43:03.254738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.624 [2024-07-15 21:43:03.254744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.624 [2024-07-15 21:43:03.254762] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:12.624 [2024-07-15 21:43:03.254774] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:12.624 [2024-07-15 21:43:03.254782] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:12.624 [2024-07-15 21:43:03.254789] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:12.624 [2024-07-15 21:43:03.254797] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:12.624 [2024-07-15 21:43:03.254804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.254818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.254829] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.254853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.624 [2024-07-15 21:43:03.254872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.624 [2024-07-15 21:43:03.254953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.624 [2024-07-15 21:43:03.254965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.624 [2024-07-15 21:43:03.254972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.254978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.624 [2024-07-15 21:43:03.254993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.255017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.624 [2024-07-15 21:43:03.255026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.255047] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.624 [2024-07-15 21:43:03.255056] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255063] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.255078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.624 [2024-07-15 21:43:03.255087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.255108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.624 [2024-07-15 21:43:03.255116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.255144] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.255158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.255175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:12.624 [2024-07-15 21:43:03.255196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b623c0, cid 0, qid 0 00:21:12.624 [2024-07-15 21:43:03.255206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62540, cid 1, qid 0 00:21:12.624 [2024-07-15 21:43:03.255214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b626c0, cid 2, qid 0 00:21:12.624 [2024-07-15 21:43:03.255221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62840, cid 3, qid 0 00:21:12.624 [2024-07-15 21:43:03.255229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b629c0, cid 4, qid 0 00:21:12.624 [2024-07-15 21:43:03.255340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.624 [2024-07-15 21:43:03.255352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.624 [2024-07-15 21:43:03.255358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b629c0) on tqpair=0x1b02400 00:21:12.624 [2024-07-15 21:43:03.255374] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:12.624 [2024-07-15 21:43:03.255382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.255396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.255412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.255423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.624 [2024-07-15 
21:43:03.255430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.255446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.624 [2024-07-15 21:43:03.255465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b629c0, cid 4, qid 0 00:21:12.624 [2024-07-15 21:43:03.255557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.624 [2024-07-15 21:43:03.255569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.624 [2024-07-15 21:43:03.255576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b629c0) on tqpair=0x1b02400 00:21:12.624 [2024-07-15 21:43:03.255646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.255663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:12.624 [2024-07-15 21:43:03.255677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.624 [2024-07-15 21:43:03.255684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b02400) 00:21:12.624 [2024-07-15 21:43:03.255694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.624 [2024-07-15 21:43:03.255713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b629c0, cid 4, qid 0 00:21:12.624 [2024-07-15 21:43:03.255811] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.624 [2024-07-15 21:43:03.255827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.625 [2024-07-15 21:43:03.255834] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.255840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b02400): datao=0, datal=4096, cccid=4 00:21:12.625 [2024-07-15 21:43:03.255847] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b629c0) on tqpair(0x1b02400): expected_datao=0, payload_size=4096 00:21:12.625 [2024-07-15 21:43:03.255855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.255871] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.255878] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.300147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.625 [2024-07-15 21:43:03.300164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.625 [2024-07-15 21:43:03.300172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.300179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b629c0) on tqpair=0x1b02400 00:21:12.625 [2024-07-15 21:43:03.300202] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:12.625 [2024-07-15 21:43:03.300220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.300238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.300251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.625 
[2024-07-15 21:43:03.300258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b02400) 00:21:12.625 [2024-07-15 21:43:03.300273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.625 [2024-07-15 21:43:03.300294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b629c0, cid 4, qid 0 00:21:12.625 [2024-07-15 21:43:03.300399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.625 [2024-07-15 21:43:03.300412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.625 [2024-07-15 21:43:03.300419] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.300425] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b02400): datao=0, datal=4096, cccid=4 00:21:12.625 [2024-07-15 21:43:03.300432] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b629c0) on tqpair(0x1b02400): expected_datao=0, payload_size=4096 00:21:12.625 [2024-07-15 21:43:03.300440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.300456] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.300464] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.625 [2024-07-15 21:43:03.341232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.625 [2024-07-15 21:43:03.341240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b629c0) on tqpair=0x1b02400 00:21:12.625 [2024-07-15 21:43:03.341282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b02400) 00:21:12.625 [2024-07-15 21:43:03.341335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.625 [2024-07-15 21:43:03.341356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b629c0, cid 4, qid 0 00:21:12.625 [2024-07-15 21:43:03.341468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.625 [2024-07-15 21:43:03.341485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.625 [2024-07-15 21:43:03.341492] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341498] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b02400): datao=0, datal=4096, cccid=4 00:21:12.625 [2024-07-15 21:43:03.341519] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b629c0) on tqpair(0x1b02400): expected_datao=0, payload_size=4096 00:21:12.625 [2024-07-15 21:43:03.341526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341536] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341544] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.625 [2024-07-15 21:43:03.341577] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.625 [2024-07-15 21:43:03.341584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b629c0) on tqpair=0x1b02400 00:21:12.625 [2024-07-15 21:43:03.341605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341691] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:12.625 [2024-07-15 21:43:03.341698] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:12.625 [2024-07-15 21:43:03.341707] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:12.625 [2024-07-15 21:43:03.341726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b02400) 00:21:12.625 [2024-07-15 21:43:03.341745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.625 [2024-07-15 21:43:03.341756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b02400) 00:21:12.625 [2024-07-15 21:43:03.341779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:12.625 [2024-07-15 21:43:03.341803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b629c0, cid 4, qid 0 00:21:12.625 [2024-07-15 21:43:03.341814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62b40, cid 5, qid 0 00:21:12.625 [2024-07-15 21:43:03.341914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.625 [2024-07-15 21:43:03.341926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.625 [2024-07-15 21:43:03.341933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b629c0) on tqpair=0x1b02400 00:21:12.625 [2024-07-15 21:43:03.341951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.625 [2024-07-15 21:43:03.341973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.625 [2024-07-15 21:43:03.341979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.341986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1b62b40) on tqpair=0x1b02400 00:21:12.625 [2024-07-15 21:43:03.342001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.342010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b02400) 00:21:12.625 [2024-07-15 21:43:03.342020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.625 [2024-07-15 21:43:03.342038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62b40, cid 5, qid 0 00:21:12.625 [2024-07-15 21:43:03.342148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.625 [2024-07-15 21:43:03.342162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.625 [2024-07-15 21:43:03.342169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.342176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62b40) on tqpair=0x1b02400 00:21:12.625 [2024-07-15 21:43:03.342206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.625 [2024-07-15 21:43:03.342214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b02400) 00:21:12.625 [2024-07-15 21:43:03.342228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.625 [2024-07-15 21:43:03.342248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62b40, cid 5, qid 0 00:21:12.625 [2024-07-15 21:43:03.342343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.625 [2024-07-15 21:43:03.342356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.625 [2024-07-15 21:43:03.342363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.625 [2024-07-15 
21:43:03.342370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62b40) on tqpair=0x1b02400 00:21:12.626 [2024-07-15 21:43:03.342386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b02400) 00:21:12.626 [2024-07-15 21:43:03.342406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.626 [2024-07-15 21:43:03.342425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62b40, cid 5, qid 0 00:21:12.626 [2024-07-15 21:43:03.342507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.626 [2024-07-15 21:43:03.342519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.626 [2024-07-15 21:43:03.342526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62b40) on tqpair=0x1b02400 00:21:12.626 [2024-07-15 21:43:03.342557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b02400) 00:21:12.626 [2024-07-15 21:43:03.342577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.626 [2024-07-15 21:43:03.342589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b02400) 00:21:12.626 [2024-07-15 21:43:03.342606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff 
cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.626 [2024-07-15 21:43:03.342618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b02400) 00:21:12.626 [2024-07-15 21:43:03.342634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.626 [2024-07-15 21:43:03.342646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b02400) 00:21:12.626 [2024-07-15 21:43:03.342663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.626 [2024-07-15 21:43:03.342684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62b40, cid 5, qid 0 00:21:12.626 [2024-07-15 21:43:03.342694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b629c0, cid 4, qid 0 00:21:12.626 [2024-07-15 21:43:03.342702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62cc0, cid 6, qid 0 00:21:12.626 [2024-07-15 21:43:03.342710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62e40, cid 7, qid 0 00:21:12.626 [2024-07-15 21:43:03.342874] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.626 [2024-07-15 21:43:03.342891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.626 [2024-07-15 21:43:03.342905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342915] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b02400): datao=0, datal=8192, 
cccid=5 00:21:12.626 [2024-07-15 21:43:03.342923] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b62b40) on tqpair(0x1b02400): expected_datao=0, payload_size=8192 00:21:12.626 [2024-07-15 21:43:03.342931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342949] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342958] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.626 [2024-07-15 21:43:03.342981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.626 [2024-07-15 21:43:03.342987] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.342994] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b02400): datao=0, datal=512, cccid=4 00:21:12.626 [2024-07-15 21:43:03.343002] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b629c0) on tqpair(0x1b02400): expected_datao=0, payload_size=512 00:21:12.626 [2024-07-15 21:43:03.343009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343019] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343026] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.626 [2024-07-15 21:43:03.343044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.626 [2024-07-15 21:43:03.343050] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343057] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b02400): datao=0, datal=512, cccid=6 00:21:12.626 [2024-07-15 21:43:03.343065] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b62cc0) on tqpair(0x1b02400): expected_datao=0, payload_size=512 00:21:12.626 [2024-07-15 21:43:03.343073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343082] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343089] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343097] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:12.626 [2024-07-15 21:43:03.343106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:12.626 [2024-07-15 21:43:03.343113] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343119] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b02400): datao=0, datal=4096, cccid=7 00:21:12.626 [2024-07-15 21:43:03.343127] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b62e40) on tqpair(0x1b02400): expected_datao=0, payload_size=4096 00:21:12.626 [2024-07-15 21:43:03.343135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343155] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343162] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.626 [2024-07-15 21:43:03.343181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.626 [2024-07-15 21:43:03.343187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62b40) on tqpair=0x1b02400 00:21:12.626 [2024-07-15 21:43:03.343213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.626 [2024-07-15 
21:43:03.343223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.626 [2024-07-15 21:43:03.343230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b629c0) on tqpair=0x1b02400 00:21:12.626 [2024-07-15 21:43:03.343252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.626 [2024-07-15 21:43:03.343266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.626 [2024-07-15 21:43:03.343273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62cc0) on tqpair=0x1b02400 00:21:12.626 [2024-07-15 21:43:03.343291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.626 [2024-07-15 21:43:03.343301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.626 [2024-07-15 21:43:03.343308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.626 [2024-07-15 21:43:03.343315] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62e40) on tqpair=0x1b02400 00:21:12.626 ===================================================== 00:21:12.626 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.626 ===================================================== 00:21:12.626 Controller Capabilities/Features 00:21:12.626 ================================ 00:21:12.626 Vendor ID: 8086 00:21:12.626 Subsystem Vendor ID: 8086 00:21:12.626 Serial Number: SPDK00000000000001 00:21:12.626 Model Number: SPDK bdev Controller 00:21:12.626 Firmware Version: 24.09 00:21:12.626 Recommended Arb Burst: 6 00:21:12.626 IEEE OUI Identifier: e4 d2 5c 00:21:12.626 Multi-path I/O 00:21:12.626 May have multiple subsystem ports: Yes 00:21:12.626 May have multiple controllers: Yes 00:21:12.626 Associated 
with SR-IOV VF: No 00:21:12.626 Max Data Transfer Size: 131072 00:21:12.626 Max Number of Namespaces: 32 00:21:12.626 Max Number of I/O Queues: 127 00:21:12.626 NVMe Specification Version (VS): 1.3 00:21:12.626 NVMe Specification Version (Identify): 1.3 00:21:12.626 Maximum Queue Entries: 128 00:21:12.626 Contiguous Queues Required: Yes 00:21:12.626 Arbitration Mechanisms Supported 00:21:12.626 Weighted Round Robin: Not Supported 00:21:12.626 Vendor Specific: Not Supported 00:21:12.626 Reset Timeout: 15000 ms 00:21:12.626 Doorbell Stride: 4 bytes 00:21:12.626 NVM Subsystem Reset: Not Supported 00:21:12.626 Command Sets Supported 00:21:12.626 NVM Command Set: Supported 00:21:12.626 Boot Partition: Not Supported 00:21:12.626 Memory Page Size Minimum: 4096 bytes 00:21:12.626 Memory Page Size Maximum: 4096 bytes 00:21:12.626 Persistent Memory Region: Not Supported 00:21:12.626 Optional Asynchronous Events Supported 00:21:12.626 Namespace Attribute Notices: Supported 00:21:12.626 Firmware Activation Notices: Not Supported 00:21:12.626 ANA Change Notices: Not Supported 00:21:12.626 PLE Aggregate Log Change Notices: Not Supported 00:21:12.626 LBA Status Info Alert Notices: Not Supported 00:21:12.626 EGE Aggregate Log Change Notices: Not Supported 00:21:12.626 Normal NVM Subsystem Shutdown event: Not Supported 00:21:12.626 Zone Descriptor Change Notices: Not Supported 00:21:12.626 Discovery Log Change Notices: Not Supported 00:21:12.626 Controller Attributes 00:21:12.626 128-bit Host Identifier: Supported 00:21:12.626 Non-Operational Permissive Mode: Not Supported 00:21:12.626 NVM Sets: Not Supported 00:21:12.626 Read Recovery Levels: Not Supported 00:21:12.626 Endurance Groups: Not Supported 00:21:12.626 Predictable Latency Mode: Not Supported 00:21:12.626 Traffic Based Keep ALive: Not Supported 00:21:12.626 Namespace Granularity: Not Supported 00:21:12.626 SQ Associations: Not Supported 00:21:12.626 UUID List: Not Supported 00:21:12.626 Multi-Domain Subsystem: Not 
Supported 00:21:12.626 Fixed Capacity Management: Not Supported 00:21:12.626 Variable Capacity Management: Not Supported 00:21:12.626 Delete Endurance Group: Not Supported 00:21:12.626 Delete NVM Set: Not Supported 00:21:12.626 Extended LBA Formats Supported: Not Supported 00:21:12.626 Flexible Data Placement Supported: Not Supported 00:21:12.626 00:21:12.626 Controller Memory Buffer Support 00:21:12.626 ================================ 00:21:12.626 Supported: No 00:21:12.626 00:21:12.626 Persistent Memory Region Support 00:21:12.626 ================================ 00:21:12.626 Supported: No 00:21:12.626 00:21:12.626 Admin Command Set Attributes 00:21:12.626 ============================ 00:21:12.626 Security Send/Receive: Not Supported 00:21:12.626 Format NVM: Not Supported 00:21:12.626 Firmware Activate/Download: Not Supported 00:21:12.626 Namespace Management: Not Supported 00:21:12.626 Device Self-Test: Not Supported 00:21:12.626 Directives: Not Supported 00:21:12.626 NVMe-MI: Not Supported 00:21:12.626 Virtualization Management: Not Supported 00:21:12.626 Doorbell Buffer Config: Not Supported 00:21:12.626 Get LBA Status Capability: Not Supported 00:21:12.626 Command & Feature Lockdown Capability: Not Supported 00:21:12.626 Abort Command Limit: 4 00:21:12.626 Async Event Request Limit: 4 00:21:12.626 Number of Firmware Slots: N/A 00:21:12.626 Firmware Slot 1 Read-Only: N/A 00:21:12.626 Firmware Activation Without Reset: N/A 00:21:12.626 Multiple Update Detection Support: N/A 00:21:12.626 Firmware Update Granularity: No Information Provided 00:21:12.626 Per-Namespace SMART Log: No 00:21:12.626 Asymmetric Namespace Access Log Page: Not Supported 00:21:12.626 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:12.626 Command Effects Log Page: Supported 00:21:12.626 Get Log Page Extended Data: Supported 00:21:12.626 Telemetry Log Pages: Not Supported 00:21:12.626 Persistent Event Log Pages: Not Supported 00:21:12.626 Supported Log Pages Log Page: May Support 
00:21:12.626 Commands Supported & Effects Log Page: Not Supported 00:21:12.626 Feature Identifiers & Effects Log Page:May Support 00:21:12.626 NVMe-MI Commands & Effects Log Page: May Support 00:21:12.626 Data Area 4 for Telemetry Log: Not Supported 00:21:12.626 Error Log Page Entries Supported: 128 00:21:12.626 Keep Alive: Supported 00:21:12.626 Keep Alive Granularity: 10000 ms 00:21:12.626 00:21:12.626 NVM Command Set Attributes 00:21:12.626 ========================== 00:21:12.626 Submission Queue Entry Size 00:21:12.626 Max: 64 00:21:12.626 Min: 64 00:21:12.626 Completion Queue Entry Size 00:21:12.626 Max: 16 00:21:12.626 Min: 16 00:21:12.626 Number of Namespaces: 32 00:21:12.626 Compare Command: Supported 00:21:12.626 Write Uncorrectable Command: Not Supported 00:21:12.627 Dataset Management Command: Supported 00:21:12.627 Write Zeroes Command: Supported 00:21:12.627 Set Features Save Field: Not Supported 00:21:12.627 Reservations: Supported 00:21:12.627 Timestamp: Not Supported 00:21:12.627 Copy: Supported 00:21:12.627 Volatile Write Cache: Present 00:21:12.627 Atomic Write Unit (Normal): 1 00:21:12.627 Atomic Write Unit (PFail): 1 00:21:12.627 Atomic Compare & Write Unit: 1 00:21:12.627 Fused Compare & Write: Supported 00:21:12.627 Scatter-Gather List 00:21:12.627 SGL Command Set: Supported 00:21:12.627 SGL Keyed: Supported 00:21:12.627 SGL Bit Bucket Descriptor: Not Supported 00:21:12.627 SGL Metadata Pointer: Not Supported 00:21:12.627 Oversized SGL: Not Supported 00:21:12.627 SGL Metadata Address: Not Supported 00:21:12.627 SGL Offset: Supported 00:21:12.627 Transport SGL Data Block: Not Supported 00:21:12.627 Replay Protected Memory Block: Not Supported 00:21:12.627 00:21:12.627 Firmware Slot Information 00:21:12.627 ========================= 00:21:12.627 Active slot: 1 00:21:12.627 Slot 1 Firmware Revision: 24.09 00:21:12.627 00:21:12.627 00:21:12.627 Commands Supported and Effects 00:21:12.627 ============================== 00:21:12.627 Admin Commands 
00:21:12.627 -------------- 00:21:12.627 Get Log Page (02h): Supported 00:21:12.627 Identify (06h): Supported 00:21:12.627 Abort (08h): Supported 00:21:12.627 Set Features (09h): Supported 00:21:12.627 Get Features (0Ah): Supported 00:21:12.627 Asynchronous Event Request (0Ch): Supported 00:21:12.627 Keep Alive (18h): Supported 00:21:12.627 I/O Commands 00:21:12.627 ------------ 00:21:12.627 Flush (00h): Supported LBA-Change 00:21:12.627 Write (01h): Supported LBA-Change 00:21:12.627 Read (02h): Supported 00:21:12.627 Compare (05h): Supported 00:21:12.627 Write Zeroes (08h): Supported LBA-Change 00:21:12.627 Dataset Management (09h): Supported LBA-Change 00:21:12.627 Copy (19h): Supported LBA-Change 00:21:12.627 00:21:12.627 Error Log 00:21:12.627 ========= 00:21:12.627 00:21:12.627 Arbitration 00:21:12.627 =========== 00:21:12.627 Arbitration Burst: 1 00:21:12.627 00:21:12.627 Power Management 00:21:12.627 ================ 00:21:12.627 Number of Power States: 1 00:21:12.627 Current Power State: Power State #0 00:21:12.627 Power State #0: 00:21:12.627 Max Power: 0.00 W 00:21:12.627 Non-Operational State: Operational 00:21:12.627 Entry Latency: Not Reported 00:21:12.627 Exit Latency: Not Reported 00:21:12.627 Relative Read Throughput: 0 00:21:12.627 Relative Read Latency: 0 00:21:12.627 Relative Write Throughput: 0 00:21:12.627 Relative Write Latency: 0 00:21:12.627 Idle Power: Not Reported 00:21:12.627 Active Power: Not Reported 00:21:12.627 Non-Operational Permissive Mode: Not Supported 00:21:12.627 00:21:12.627 Health Information 00:21:12.627 ================== 00:21:12.627 Critical Warnings: 00:21:12.627 Available Spare Space: OK 00:21:12.627 Temperature: OK 00:21:12.627 Device Reliability: OK 00:21:12.627 Read Only: No 00:21:12.627 Volatile Memory Backup: OK 00:21:12.627 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:12.627 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:12.627 Available Spare: 0% 00:21:12.627 Available Spare Threshold: 0% 00:21:12.627 
Life Percentage Used:[2024-07-15 21:43:03.343436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.343447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b02400) 00:21:12.627 [2024-07-15 21:43:03.343458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.627 [2024-07-15 21:43:03.343479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62e40, cid 7, qid 0 00:21:12.627 [2024-07-15 21:43:03.343574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.627 [2024-07-15 21:43:03.343587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.627 [2024-07-15 21:43:03.343594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.343601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62e40) on tqpair=0x1b02400 00:21:12.627 [2024-07-15 21:43:03.343648] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:12.627 [2024-07-15 21:43:03.343667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b623c0) on tqpair=0x1b02400 00:21:12.627 [2024-07-15 21:43:03.343677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.627 [2024-07-15 21:43:03.343686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62540) on tqpair=0x1b02400 00:21:12.627 [2024-07-15 21:43:03.343694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.627 [2024-07-15 21:43:03.343702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b626c0) on tqpair=0x1b02400 00:21:12.627 [2024-07-15 21:43:03.343710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.627 [2024-07-15 21:43:03.343718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62840) on tqpair=0x1b02400 00:21:12.627 [2024-07-15 21:43:03.343726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:12.627 [2024-07-15 21:43:03.343739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.343746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.343753] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b02400) 00:21:12.627 [2024-07-15 21:43:03.343763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.627 [2024-07-15 21:43:03.343784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62840, cid 3, qid 0 00:21:12.627 [2024-07-15 21:43:03.343912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.627 [2024-07-15 21:43:03.343924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.627 [2024-07-15 21:43:03.343931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.343938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62840) on tqpair=0x1b02400 00:21:12.627 [2024-07-15 21:43:03.343949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.343957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.343967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b02400) 00:21:12.627 [2024-07-15 21:43:03.343978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.627 [2024-07-15 21:43:03.344002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62840, cid 3, qid 0 00:21:12.627 [2024-07-15 21:43:03.344096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.627 [2024-07-15 21:43:03.344108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.627 [2024-07-15 21:43:03.344115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.344122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62840) on tqpair=0x1b02400 00:21:12.627 [2024-07-15 21:43:03.344130] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:12.627 [2024-07-15 21:43:03.348149] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:12.627 [2024-07-15 21:43:03.348171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.348192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.348199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b02400) 00:21:12.627 [2024-07-15 21:43:03.348210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.627 [2024-07-15 21:43:03.348230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b62840, cid 3, qid 0 00:21:12.627 [2024-07-15 21:43:03.348358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:12.627 [2024-07-15 21:43:03.348383] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:12.627 [2024-07-15 21:43:03.348390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:12.627 [2024-07-15 21:43:03.348397] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b62840) on tqpair=0x1b02400 00:21:12.627 [2024-07-15 21:43:03.348410] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:21:12.627 0% 00:21:12.627 Data Units Read: 0 00:21:12.627 Data Units Written: 0 00:21:12.627 Host Read Commands: 0 00:21:12.627 Host Write Commands: 0 00:21:12.627 Controller Busy Time: 0 minutes 00:21:12.627 Power Cycles: 0 00:21:12.627 Power On Hours: 0 hours 00:21:12.627 Unsafe Shutdowns: 0 00:21:12.627 Unrecoverable Media Errors: 0 00:21:12.627 Lifetime Error Log Entries: 0 00:21:12.627 Warning Temperature Time: 0 minutes 00:21:12.627 Critical Temperature Time: 0 minutes 00:21:12.627 00:21:12.627 Number of Queues 00:21:12.627 ================ 00:21:12.627 Number of I/O Submission Queues: 127 00:21:12.627 Number of I/O Completion Queues: 127 00:21:12.627 00:21:12.627 Active Namespaces 00:21:12.627 ================= 00:21:12.627 Namespace ID:1 00:21:12.627 Error Recovery Timeout: Unlimited 00:21:12.627 Command Set Identifier: NVM (00h) 00:21:12.627 Deallocate: Supported 00:21:12.627 Deallocated/Unwritten Error: Not Supported 00:21:12.627 Deallocated Read Value: Unknown 00:21:12.627 Deallocate in Write Zeroes: Not Supported 00:21:12.627 Deallocated Guard Field: 0xFFFF 00:21:12.627 Flush: Supported 00:21:12.627 Reservation: Supported 00:21:12.627 Namespace Sharing Capabilities: Multiple Controllers 00:21:12.627 Size (in LBAs): 131072 (0GiB) 00:21:12.627 Capacity (in LBAs): 131072 (0GiB) 00:21:12.627 Utilization (in LBAs): 131072 (0GiB) 00:21:12.627 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:12.627 EUI64: ABCDEF0123456789 00:21:12.627 UUID: 700b7d76-fff1-4c8c-940a-43bad639aa0f 00:21:12.627 Thin Provisioning: Not Supported 00:21:12.627 Per-NS Atomic Units: Yes 00:21:12.627 Atomic Boundary Size (Normal): 0 00:21:12.627 Atomic Boundary Size (PFail): 0 00:21:12.627 Atomic Boundary Offset: 0 00:21:12.627 
Maximum Single Source Range Length: 65535 00:21:12.627 Maximum Copy Length: 65535 00:21:12.627 Maximum Source Range Count: 1 00:21:12.627 NGUID/EUI64 Never Reused: No 00:21:12.627 Namespace Write Protected: No 00:21:12.627 Number of LBA Formats: 1 00:21:12.627 Current LBA Format: LBA Format #00 00:21:12.627 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:12.627 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.627 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.627 rmmod nvme_tcp 00:21:12.627 rmmod nvme_fabrics 00:21:12.886 rmmod nvme_keyring 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 
397811 ']' 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 397811 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 397811 ']' 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 397811 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 397811 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 397811' 00:21:12.886 killing process with pid 397811 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 397811 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 397811 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.886 21:43:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.422 
21:43:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.422 00:21:15.422 real 0m5.154s 00:21:15.422 user 0m4.435s 00:21:15.422 sys 0m1.688s 00:21:15.422 21:43:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:15.422 21:43:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:15.422 ************************************ 00:21:15.422 END TEST nvmf_identify 00:21:15.422 ************************************ 00:21:15.422 21:43:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:15.422 21:43:05 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:15.422 21:43:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:15.422 21:43:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.422 21:43:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:15.422 ************************************ 00:21:15.422 START TEST nvmf_perf 00:21:15.422 ************************************ 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:15.422 * Looking for test storage... 
00:21:15.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.422 21:43:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.795 21:43:07 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:16.795 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:16.795 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.795 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:16.796 Found net devices under 0000:08:00.0: cvl_0_0 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: 
cvl_0_1' 00:21:16.796 Found net devices under 0000:08:00.1: cvl_0_1 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.796 
21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.796 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:17.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:17.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:21:17.054 00:21:17.054 --- 10.0.0.2 ping statistics --- 00:21:17.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.054 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:17.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:21:17.054 00:21:17.054 --- 10.0.0.1 ping statistics --- 00:21:17.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.054 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=399422 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 399422 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 399422 ']' 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.054 21:43:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:17.054 [2024-07-15 21:43:07.723576] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:21:17.054 [2024-07-15 21:43:07.723686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.054 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.054 [2024-07-15 21:43:07.790058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.342 [2024-07-15 21:43:07.906926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.342 [2024-07-15 21:43:07.906985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.342 [2024-07-15 21:43:07.907001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.342 [2024-07-15 21:43:07.907013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.342 [2024-07-15 21:43:07.907025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.342 [2024-07-15 21:43:07.907159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.342 [2024-07-15 21:43:07.907212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.342 [2024-07-15 21:43:07.907348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.342 [2024-07-15 21:43:07.907352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.342 21:43:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.342 21:43:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:21:17.342 21:43:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.342 21:43:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.342 21:43:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:17.342 21:43:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.342 21:43:08 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:17.342 21:43:08 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:20.641 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:20.641 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:20.898 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:21:20.898 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:21.155 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:21.155 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:84:00.0 ']' 00:21:21.155 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:21.155 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:21.155 21:43:11 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.413 [2024-07-15 21:43:12.051825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.413 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:21.671 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:21.671 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.929 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:21.929 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:22.187 21:43:12 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.444 [2024-07-15 21:43:13.075265] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.444 21:43:13 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:22.702 21:43:13 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:21:22.702 21:43:13 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 
00:21:22.702 21:43:13 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:22.702 21:43:13 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:21:24.069 Initializing NVMe Controllers 00:21:24.069 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:21:24.069 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:21:24.069 Initialization complete. Launching workers. 00:21:24.069 ======================================================== 00:21:24.069 Latency(us) 00:21:24.069 Device Information : IOPS MiB/s Average min max 00:21:24.069 PCIE (0000:84:00.0) NSID 1 from core 0: 84006.00 328.15 380.29 12.77 8230.26 00:21:24.069 ======================================================== 00:21:24.069 Total : 84006.00 328.15 380.29 12.77 8230.26 00:21:24.069 00:21:24.069 21:43:14 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:24.069 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.000 Initializing NVMe Controllers 00:21:25.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:25.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:25.000 Initialization complete. Launching workers. 
00:21:25.000 ======================================================== 00:21:25.000 Latency(us) 00:21:25.000 Device Information : IOPS MiB/s Average min max 00:21:25.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 110.94 0.43 9032.39 134.00 45486.14 00:21:25.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.97 0.20 19774.48 7950.18 50878.49 00:21:25.000 ======================================================== 00:21:25.000 Total : 161.91 0.63 12414.16 134.00 50878.49 00:21:25.000 00:21:25.000 21:43:15 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:25.000 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.415 Initializing NVMe Controllers 00:21:26.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:26.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:26.415 Initialization complete. Launching workers. 
00:21:26.415 ======================================================== 00:21:26.415 Latency(us) 00:21:26.415 Device Information : IOPS MiB/s Average min max 00:21:26.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8534.99 33.34 3761.86 773.74 7607.86 00:21:26.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3939.00 15.39 8161.57 6428.48 15840.96 00:21:26.416 ======================================================== 00:21:26.416 Total : 12473.99 48.73 5151.19 773.74 15840.96 00:21:26.416 00:21:26.416 21:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:26.416 21:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:26.416 21:43:16 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:26.416 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.940 Initializing NVMe Controllers 00:21:28.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.940 Controller IO queue size 128, less than required. 00:21:28.940 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:28.940 Controller IO queue size 128, less than required. 00:21:28.940 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:28.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:28.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:28.940 Initialization complete. Launching workers. 
00:21:28.940 ======================================================== 00:21:28.940 Latency(us) 00:21:28.940 Device Information : IOPS MiB/s Average min max 00:21:28.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1905.44 476.36 68344.73 39136.42 110053.61 00:21:28.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.98 145.25 224135.52 76414.62 340780.65 00:21:28.940 ======================================================== 00:21:28.940 Total : 2486.42 621.61 104747.09 39136.42 340780.65 00:21:28.940 00:21:28.941 21:43:19 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:28.941 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.941 No valid NVMe controllers or AIO or URING devices found 00:21:28.941 Initializing NVMe Controllers 00:21:28.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.941 Controller IO queue size 128, less than required. 00:21:28.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:28.941 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:28.941 Controller IO queue size 128, less than required. 00:21:28.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:28.941 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:28.941 WARNING: Some requested NVMe devices were skipped 00:21:28.941 21:43:19 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:28.941 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.220 Initializing NVMe Controllers 00:21:32.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:32.221 Controller IO queue size 128, less than required. 00:21:32.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:32.221 Controller IO queue size 128, less than required. 00:21:32.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:32.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:32.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:32.221 Initialization complete. Launching workers. 
00:21:32.221 00:21:32.221 ==================== 00:21:32.221 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:32.221 TCP transport: 00:21:32.221 polls: 12443 00:21:32.221 idle_polls: 8893 00:21:32.221 sock_completions: 3550 00:21:32.221 nvme_completions: 6227 00:21:32.221 submitted_requests: 9416 00:21:32.221 queued_requests: 1 00:21:32.221 00:21:32.221 ==================== 00:21:32.221 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:32.221 TCP transport: 00:21:32.221 polls: 9987 00:21:32.221 idle_polls: 5844 00:21:32.221 sock_completions: 4143 00:21:32.221 nvme_completions: 7141 00:21:32.221 submitted_requests: 10656 00:21:32.221 queued_requests: 1 00:21:32.221 ======================================================== 00:21:32.221 Latency(us) 00:21:32.221 Device Information : IOPS MiB/s Average min max 00:21:32.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1556.50 389.12 84138.42 53795.89 130201.68 00:21:32.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1785.00 446.25 72027.23 38053.55 102273.84 00:21:32.221 ======================================================== 00:21:32.221 Total : 3341.49 835.37 77668.73 38053.55 130201.68 00:21:32.221 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:32.221 21:43:22 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:32.221 rmmod nvme_tcp 00:21:32.221 rmmod nvme_fabrics 00:21:32.221 rmmod nvme_keyring 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 399422 ']' 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 399422 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 399422 ']' 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 399422 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 399422 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 399422' 00:21:32.221 killing process with pid 399422 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 399422 00:21:32.221 21:43:22 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 399422 00:21:33.590 21:43:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:33.590 21:43:24 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.590 21:43:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.590 21:43:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.590 21:43:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.590 21:43:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.590 21:43:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.590 21:43:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.498 21:43:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:35.498 00:21:35.498 real 0m20.510s 00:21:35.498 user 1m4.299s 00:21:35.498 sys 0m4.752s 00:21:35.498 21:43:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.498 21:43:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:35.498 ************************************ 00:21:35.498 END TEST nvmf_perf 00:21:35.498 ************************************ 00:21:35.758 21:43:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:35.758 21:43:26 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:35.758 21:43:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:35.758 21:43:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.758 21:43:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.758 ************************************ 00:21:35.758 START TEST nvmf_fio_host 00:21:35.758 ************************************ 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:35.758 * Looking for test storage... 
00:21:35.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.758 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:35.759 
21:43:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:35.759 21:43:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:37.662 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:37.662 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:37.662 Found net devices under 0000:08:00.0: cvl_0_0 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:37.662 Found net devices under 0000:08:00.1: cvl_0_1 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.662 
21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:21:37.662 00:21:37.662 --- 10.0.0.2 ping statistics --- 00:21:37.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.662 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:37.662 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:21:37.663 00:21:37.663 --- 10.0.0.1 ping statistics --- 00:21:37.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.663 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:37.663 21:43:28 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=402409 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 402409 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 402409 ']' 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.663 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.663 [2024-07-15 21:43:28.254053] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:21:37.663 [2024-07-15 21:43:28.254150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.663 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.663 [2024-07-15 21:43:28.318158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.663 [2024-07-15 21:43:28.435198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.663 [2024-07-15 21:43:28.435256] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.663 [2024-07-15 21:43:28.435283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.663 [2024-07-15 21:43:28.435302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.663 [2024-07-15 21:43:28.435321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
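The `nvmf_tcp_init` commands earlier in the log build a two-port test topology: one NIC port (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace as the target side at 10.0.0.2, while its sibling (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP traffic on port 4420. A dry-run reconstruction of that sequence (it only echoes the commands, since the real ones require root; interface and namespace names are taken from the log):

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the namespace topology built in this log.
# run() echoes instead of executing, so no root privileges are needed.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port, moved into the namespace
INI_IF=cvl_0_1      # initiator-side port, left in the root namespace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2   # initiator -> target connectivity check
```

The two `ping` blocks in the log (root namespace to 10.0.0.2, and `ip netns exec … ping` back to 10.0.0.1) verify this topology in both directions before the target is started inside the namespace.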
00:21:37.663 [2024-07-15 21:43:28.435391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.663 [2024-07-15 21:43:28.435485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.663 [2024-07-15 21:43:28.435562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.663 [2024-07-15 21:43:28.435571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.920 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.920 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:37.920 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:38.177 [2024-07-15 21:43:28.822596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.177 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:38.177 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.177 21:43:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.177 21:43:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:38.434 Malloc1 00:21:38.435 21:43:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.692 21:43:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:38.950 21:43:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.206 
[2024-07-15 21:43:29.813248] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.206 21:43:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:39.464 21:43:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:39.722 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:39.722 fio-3.35 00:21:39.722 Starting 1 thread 00:21:39.722 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.246 00:21:42.246 test: (groupid=0, jobs=1): err= 0: pid=402751: Mon Jul 15 21:43:32 2024 00:21:42.246 read: IOPS=9273, BW=36.2MiB/s (38.0MB/s)(72.7MiB/2006msec) 00:21:42.246 slat (usec): 
min=2, max=244, avg= 2.94, stdev= 2.55 00:21:42.246 clat (usec): min=2970, max=15079, avg=7570.89, stdev=699.75 00:21:42.246 lat (usec): min=3007, max=15082, avg=7573.83, stdev=699.60 00:21:42.246 clat percentiles (usec): 00:21:42.246 | 1.00th=[ 6063], 5.00th=[ 6587], 10.00th=[ 6849], 20.00th=[ 7111], 00:21:42.246 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:21:42.246 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:21:42.246 | 99.00th=[ 8979], 99.50th=[10683], 99.90th=[13960], 99.95th=[14222], 00:21:42.246 | 99.99th=[15139] 00:21:42.246 bw ( KiB/s): min=35568, max=37784, per=99.49%, avg=36904.00, stdev=1176.27, samples=3 00:21:42.246 iops : min= 8892, max= 9446, avg=9226.00, stdev=294.07, samples=3 00:21:42.246 write: IOPS=9280, BW=36.3MiB/s (38.0MB/s)(72.7MiB/2006msec); 0 zone resets 00:21:42.246 slat (usec): min=2, max=194, avg= 3.00, stdev= 1.74 00:21:42.246 clat (usec): min=2265, max=12294, avg=6159.15, stdev=570.96 00:21:42.246 lat (usec): min=2278, max=12297, avg=6162.14, stdev=570.99 00:21:42.246 clat percentiles (usec): 00:21:42.246 | 1.00th=[ 4883], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5800], 00:21:42.246 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:21:42.246 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6783], 95.00th=[ 6980], 00:21:42.246 | 99.00th=[ 7439], 99.50th=[ 7898], 99.90th=[10945], 99.95th=[11731], 00:21:42.246 | 99.99th=[12256] 00:21:42.246 bw ( KiB/s): min=36384, max=37440, per=99.56%, avg=36960.00, stdev=534.51, samples=3 00:21:42.246 iops : min= 9096, max= 9360, avg=9240.00, stdev=133.63, samples=3 00:21:42.246 lat (msec) : 4=0.23%, 10=99.35%, 20=0.41% 00:21:42.246 cpu : usr=69.98%, sys=28.28%, ctx=76, majf=0, minf=41 00:21:42.246 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:42.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:42.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:21:42.246 issued rwts: total=18603,18617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:42.246 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:42.246 00:21:42.246 Run status group 0 (all jobs): 00:21:42.246 READ: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=72.7MiB (76.2MB), run=2006-2006msec 00:21:42.246 WRITE: bw=36.3MiB/s (38.0MB/s), 36.3MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=72.7MiB (76.3MB), run=2006-2006msec 00:21:42.246 21:43:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:42.246 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:42.246 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:42.246 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:42.246 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:42.246 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.246 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:42.246 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:42.247 21:43:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:42.247 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:42.247 fio-3.35 00:21:42.247 Starting 1 thread 00:21:42.247 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.811 00:21:44.811 test: (groupid=0, jobs=1): err= 0: pid=403017: Mon Jul 15 21:43:35 2024 00:21:44.811 read: IOPS=8323, BW=130MiB/s (136MB/s)(261MiB/2007msec) 00:21:44.811 slat (usec): min=3, max=124, avg= 4.58, stdev= 1.65 00:21:44.811 clat (usec): min=2068, 
max=17924, avg=8962.98, stdev=2122.28 00:21:44.811 lat (usec): min=2073, max=17928, avg=8967.56, stdev=2122.37 00:21:44.811 clat percentiles (usec): 00:21:44.811 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7111], 00:21:44.811 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:21:44.811 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[11731], 95.00th=[12387], 00:21:44.811 | 99.00th=[14877], 99.50th=[15926], 99.90th=[17171], 99.95th=[17433], 00:21:44.811 | 99.99th=[17957] 00:21:44.811 bw ( KiB/s): min=58208, max=77632, per=50.92%, avg=67808.00, stdev=9548.27, samples=4 00:21:44.811 iops : min= 3638, max= 4852, avg=4238.00, stdev=596.77, samples=4 00:21:44.811 write: IOPS=4911, BW=76.7MiB/s (80.5MB/s)(138MiB/1804msec); 0 zone resets 00:21:44.811 slat (usec): min=32, max=203, avg=40.79, stdev= 5.28 00:21:44.811 clat (usec): min=4513, max=17464, avg=11376.73, stdev=1743.81 00:21:44.811 lat (usec): min=4555, max=17507, avg=11417.52, stdev=1744.23 00:21:44.811 clat percentiles (usec): 00:21:44.811 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:21:44.811 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:21:44.811 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13829], 95.00th=[14484], 00:21:44.811 | 99.00th=[15795], 99.50th=[16319], 99.90th=[17171], 99.95th=[17171], 00:21:44.811 | 99.99th=[17433] 00:21:44.811 bw ( KiB/s): min=61952, max=79968, per=89.60%, avg=70408.00, stdev=9421.03, samples=4 00:21:44.811 iops : min= 3872, max= 4998, avg=4400.50, stdev=588.81, samples=4 00:21:44.811 lat (msec) : 4=0.20%, 10=53.54%, 20=46.27% 00:21:44.811 cpu : usr=83.61%, sys=15.55%, ctx=16, majf=0, minf=67 00:21:44.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:44.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:44.811 issued rwts: total=16705,8860,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:44.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:44.811 00:21:44.811 Run status group 0 (all jobs): 00:21:44.811 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=261MiB (274MB), run=2007-2007msec 00:21:44.811 WRITE: bw=76.7MiB/s (80.5MB/s), 76.7MiB/s-76.7MiB/s (80.5MB/s-80.5MB/s), io=138MiB (145MB), run=1804-1804msec 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.811 rmmod nvme_tcp 00:21:44.811 rmmod nvme_fabrics 00:21:44.811 rmmod nvme_keyring 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 402409 ']' 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 402409 
00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 402409 ']' 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 402409 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 402409 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 402409' 00:21:44.811 killing process with pid 402409 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 402409 00:21:44.811 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 402409 00:21:45.070 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:45.070 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:45.070 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:45.070 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.070 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.070 21:43:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.070 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.070 21:43:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.975 21:43:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.975 
00:21:46.975 real 0m11.410s 00:21:46.975 user 0m33.877s 00:21:46.975 sys 0m3.517s 00:21:46.975 21:43:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.975 21:43:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.975 ************************************ 00:21:46.975 END TEST nvmf_fio_host 00:21:46.975 ************************************ 00:21:47.234 21:43:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:47.234 21:43:37 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:47.234 21:43:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.235 21:43:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.235 21:43:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.235 ************************************ 00:21:47.235 START TEST nvmf_failover 00:21:47.235 ************************************ 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:47.235 * Looking for test storage... 
00:21:47.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.235 21:43:37 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.235 21:43:37 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.235 21:43:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.144 21:43:39 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:49.144 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:49.144 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.144 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:49.145 Found net devices under 0000:08:00.0: cvl_0_0 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:49.145 Found net devices under 0000:08:00.1: cvl_0_1 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:21:49.145 00:21:49.145 --- 10.0.0.2 ping statistics --- 00:21:49.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.145 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:21:49.145 00:21:49.145 --- 10.0.0.1 ping statistics --- 00:21:49.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.145 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=404713 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 404713 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 404713 ']' 
00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.145 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:49.145 [2024-07-15 21:43:39.670119] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:21:49.145 [2024-07-15 21:43:39.670217] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.145 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.145 [2024-07-15 21:43:39.734445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:49.145 [2024-07-15 21:43:39.850626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.145 [2024-07-15 21:43:39.850679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.145 [2024-07-15 21:43:39.850695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.145 [2024-07-15 21:43:39.850709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.145 [2024-07-15 21:43:39.850720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.145 [2024-07-15 21:43:39.850797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.145 [2024-07-15 21:43:39.850850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.145 [2024-07-15 21:43:39.850853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.404 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.404 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:49.404 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.404 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.404 21:43:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:49.404 21:43:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.404 21:43:39 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:49.662 [2024-07-15 21:43:40.261265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.662 21:43:40 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:49.920 Malloc0 00:21:49.920 21:43:40 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.177 21:43:40 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.435 21:43:41 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.692 [2024-07-15 21:43:41.470994] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.950 21:43:41 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:51.208 [2024-07-15 21:43:41.763856] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:51.208 21:43:41 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:51.466 [2024-07-15 21:43:42.056686] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=404944 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 404944 /var/tmp/bdevperf.sock 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 404944 ']' 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.467 21:43:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:51.724 21:43:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.724 21:43:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:51.724 21:43:42 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.288 NVMe0n1 00:21:52.288 21:43:42 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.852 00:21:52.852 21:43:43 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=405048 00:21:52.852 21:43:43 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:52.852 21:43:43 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:53.779 21:43:44 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.037 [2024-07-15 21:43:44.627878] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1caa6b0 is same with the state(5) to be set
00:21:54.038 21:43:44 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:57.315 21:43:47 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.573 00:21:57.573 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:57.831 [2024-07-15 21:43:48.465812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cabc30 is same with the state(5) to be set
00:21:57.831 21:43:48 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:01.111 21:43:51 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.112 [2024-07-15 21:43:51.760768] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.112 21:43:51 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:02.045 21:43:52 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:02.303 21:43:53 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 405048 00:22:08.858 0 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 404944 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 404944 ']' 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 404944 00:22:08.858 21:43:58
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 404944 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 404944' 00:22:08.858 killing process with pid 404944 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 404944 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 404944 00:22:08.858 21:43:58 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:08.858 [2024-07-15 21:43:42.123002] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:22:08.858 [2024-07-15 21:43:42.123114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid404944 ] 00:22:08.858 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.858 [2024-07-15 21:43:42.178543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.858 [2024-07-15 21:43:42.278052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.858 Running I/O for 15 seconds... 
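Condensed, the target-side RPC sequence this test drives (visible in the `rpc.py` invocations above) is: create a TCP transport, back a subsystem with a 64 MiB malloc bdev, expose it on ports 4420/4421/4422, then remove and re-add listeners while bdevperf runs I/O on the initiator side. A hedged shell sketch of that flow follows; the paths, RPC names, and NQN are taken from the log, while the `rpc` wrapper and its dry-run default are additions here so the sketch can run without a live SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the failover test's target-side RPC flow (assumed layout:
# scripts/rpc.py from an SPDK checkout). DRY_RUN=1 (the default here)
# prints each command instead of executing it.
RPC="${RPC:-scripts/rpc.py}"
NQN="nqn.2016-06.io.spdk:cnode1"
DRY_RUN="${DRY_RUN:-1}"

rpc() {
  if [ "$DRY_RUN" = 1 ]; then echo "$RPC $*"; else "$RPC" "$@"; fi
}

# Target setup: TCP transport, malloc-backed namespace, three listeners.
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
for port in 4420 4421 4422; do
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done

# Failover exercise while bdevperf drives I/O from the initiator side:
# drop the active path, wait, drop the next, restore the first, and so on.
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
rpc nvmf_subsystem_add_listener    "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
```

On the initiator side, the log pairs this with `bdev_nvme_attach_controller -b NVMe0` issued once per portal against `/var/tmp/bdevperf.sock`, so the bdev layer holds multiple paths to the same NQN and can fail over when a listener disappears.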
00:22:08.858 [2024-07-15 21:43:44.629251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 
[2024-07-15 21:43:44.629779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.858 [2024-07-15 21:43:44.629967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.629982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.629996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 
[2024-07-15 21:43:44.630275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.858 [2024-07-15 21:43:44.630308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.858 [2024-07-15 21:43:44.630322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.859 [2024-07-15 21:43:44.630635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 
[2024-07-15 21:43:44.630768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.630982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.630997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 
21:43:44.631273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.631974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.631988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.632002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.632015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.632031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.859 [2024-07-15 21:43:44.632044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.632080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.859 [2024-07-15 21:43:44.632096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80784 len:8 PRP1 0x0 PRP2 0x0 00:22:08.859 [2024-07-15 21:43:44.632109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.859 [2024-07-15 21:43:44.632133] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80792 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80800 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80808 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632322] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80816 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80824 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80832 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80840 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80848 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80864 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632648] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80872 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 
[2024-07-15 21:43:44.632811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80920 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.632956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.632969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.632983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.632995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80928 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80936 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80944 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80952 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80960 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80968 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80976 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81000 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81008 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81016 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80376 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.860 [2024-07-15 21:43:44.633666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.860 [2024-07-15 21:43:44.633677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80384 len:8 PRP1 0x0 PRP2 0x0 00:22:08.860 [2024-07-15 21:43:44.633689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633741] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1324b20 was disconnected and freed. reset controller. 00:22:08.860 [2024-07-15 21:43:44.633761] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:08.860 [2024-07-15 21:43:44.633805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.860 [2024-07-15 21:43:44.633826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.860 [2024-07-15 21:43:44.633854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.860 [2024-07-15 21:43:44.633881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.860 [2024-07-15 21:43:44.633907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.860 [2024-07-15 21:43:44.633919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:08.860 [2024-07-15 21:43:44.637539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:08.860 [2024-07-15 21:43:44.637575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f4430 (9): Bad file descriptor 00:22:08.860 [2024-07-15 21:43:44.673838] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:08.861 [2024-07-15 21:43:48.467030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 
21:43:48.467353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467509] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 
21:43:48.467841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.467983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.467998] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.468011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.468042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.468071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.468099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.468127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.468164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.468192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.861 [2024-07-15 21:43:48.468220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 
21:43:48.468335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468494] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.861 [2024-07-15 21:43:48.468579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.861 [2024-07-15 21:43:48.468593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 
21:43:48.468822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.468977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.468990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [2024-07-15 21:43:48.469307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.862 [2024-07-15 21:43:48.469320] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.862 [... repeated per-command notices omitted: queued WRITE commands (lba 107112-107496) and READ commands (lba 106792-106800) on qid:1 were each printed via nvme_io_qpair_print_command, completed manually (nvme_qpair_manual_complete_request), and aborted with ABORTED - SQ DELETION (00/08) status by nvme_qpair_abort_queued_reqs during queue-pair teardown ...] 00:22:08.863 [2024-07-15 21:43:48.471045] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12ee440 was disconnected and freed. reset controller. 00:22:08.863 [2024-07-15 21:43:48.471066] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:08.863 [... four pending ASYNC EVENT REQUEST (0c) admin commands on qid:0 (cid 3, 2, 1, 0) aborted with SQ DELETION (00/08) status, omitted ...] 00:22:08.863 [2024-07-15 21:43:48.471222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:08.863 [2024-07-15 21:43:48.471278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f4430 (9): Bad file descriptor 00:22:08.863 [2024-07-15 21:43:48.474765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:08.863 [2024-07-15 21:43:48.513647] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:08.863 [... repeated per-command notices omitted: after the next disconnect at 21:43:53, queued WRITE commands (lba 52072-52352) and READ commands (lba 51616-51768) on qid:1 were each printed and aborted with ABORTED - SQ DELETION (00/08) status ...] 00:22:08.864 [2024-07-15 21:43:53.060419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-15 21:43:53.060449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-15 21:43:53.060476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-15 21:43:53.060503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.864 [2024-07-15 21:43:53.060530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 
21:43:53.060736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060896] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.060977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.060992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.864 [2024-07-15 21:43:53.061328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.864 [2024-07-15 21:43:53.061341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-15 21:43:53.061368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-15 21:43:53.061395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-15 21:43:53.061422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.865 [2024-07-15 21:43:53.061450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52616 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52624 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061583] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52632 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51808 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51816 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51824 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 
[2024-07-15 21:43:53.061742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51832 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51840 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51848 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51856 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.061961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51864 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.061972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.061985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.061996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51872 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062053] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51880 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51888 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51896 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51904 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 
[2024-07-15 21:43:53.062246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51912 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51920 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51928 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 
[2024-07-15 21:43:53.062409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51936 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51944 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51952 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51960 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51968 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51976 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51984 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062727] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51992 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52000 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52008 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52016 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 
[2024-07-15 21:43:53.062888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52024 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.062956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.062966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52032 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.062978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.062990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.865 [2024-07-15 21:43:53.063001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.865 [2024-07-15 21:43:53.063014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52040 len:8 PRP1 0x0 PRP2 0x0 00:22:08.865 [2024-07-15 21:43:53.063026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.865 [2024-07-15 21:43:53.063039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:22:08.866 [2024-07-15 21:43:53.063050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.866 [2024-07-15 21:43:53.063060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52048 len:8 PRP1 0x0 PRP2 0x0 00:22:08.866 [2024-07-15 21:43:53.063072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.866 [2024-07-15 21:43:53.063084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.866 [2024-07-15 21:43:53.063095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.866 [2024-07-15 21:43:53.063105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52056 len:8 PRP1 0x0 PRP2 0x0 00:22:08.866 [2024-07-15 21:43:53.063117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.866 [2024-07-15 21:43:53.063130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:08.866 [2024-07-15 21:43:53.063163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:08.866 [2024-07-15 21:43:53.063177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52064 len:8 PRP1 0x0 PRP2 0x0 00:22:08.866 [2024-07-15 21:43:53.063189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.866 [2024-07-15 21:43:53.063257] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12f01d0 was disconnected and freed. reset controller. 
00:22:08.866 [2024-07-15 21:43:53.063277] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:08.866 [2024-07-15 21:43:53.063325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.866 [2024-07-15 21:43:53.063343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.866 [2024-07-15 21:43:53.063357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.866 [2024-07-15 21:43:53.063388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.866 [2024-07-15 21:43:53.063421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.866 [2024-07-15 21:43:53.063433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.866 [2024-07-15 21:43:53.063446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:08.866 [2024-07-15 21:43:53.063458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:08.866 [2024-07-15 21:43:53.063470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:08.866 [2024-07-15 21:43:53.066960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:08.866 [2024-07-15 21:43:53.066997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f4430 (9): Bad file descriptor 00:22:08.866 [2024-07-15 21:43:53.096879] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:08.866 00:22:08.866 Latency(us) 00:22:08.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.866 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:08.866 Verification LBA range: start 0x0 length 0x4000 00:22:08.866 NVMe0n1 : 15.01 8971.40 35.04 245.02 0.00 13859.12 534.00 15631.55 00:22:08.866 =================================================================================================================== 00:22:08.866 Total : 8971.40 35.04 245.02 0.00 13859.12 534.00 15631.55 00:22:08.866 Received shutdown signal, test time was about 15.000000 seconds 00:22:08.866 00:22:08.866 Latency(us) 00:22:08.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.866 =================================================================================================================== 00:22:08.866 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=406442 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:08.866 21:43:58 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 406442 /var/tmp/bdevperf.sock 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 406442 ']' 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.866 21:43:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:08.866 21:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.866 21:43:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:08.866 21:43:59 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:08.866 [2024-07-15 21:43:59.331470] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:08.866 21:43:59 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:08.866 [2024-07-15 21:43:59.624282] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:08.866 21:43:59 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.429 NVMe0n1 00:22:09.429 21:44:00 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.686 00:22:09.686 21:44:00 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:10.249 00:22:10.249 21:44:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.249 21:44:00 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:10.506 21:44:01 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:10.819 21:44:01 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:14.108 21:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:14.108 21:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:14.108 21:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=407037 00:22:14.108 21:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.108 21:44:04 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 407037 00:22:15.040 0 00:22:15.040 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:15.040 [2024-07-15 21:43:58.797211] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:22:15.040 [2024-07-15 21:43:58.797319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406442 ] 00:22:15.040 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.040 [2024-07-15 21:43:58.853895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.040 [2024-07-15 21:43:58.952111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.040 [2024-07-15 21:44:01.320453] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:15.040 [2024-07-15 21:44:01.320539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.040 [2024-07-15 21:44:01.320559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.040 [2024-07-15 21:44:01.320578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.040 [2024-07-15 21:44:01.320591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.040 [2024-07-15 21:44:01.320605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.040 [2024-07-15 21:44:01.320618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.040 [2024-07-15 21:44:01.320632] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.040 [2024-07-15 21:44:01.320645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.041 [2024-07-15 21:44:01.320659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:15.041 [2024-07-15 21:44:01.320703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:15.041 [2024-07-15 21:44:01.320735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fb430 (9): Bad file descriptor 00:22:15.041 [2024-07-15 21:44:01.341392] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:15.041 Running I/O for 1 seconds... 00:22:15.041 00:22:15.041 Latency(us) 00:22:15.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.041 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:15.041 Verification LBA range: start 0x0 length 0x4000 00:22:15.041 NVMe0n1 : 1.01 9177.22 35.85 0.00 0.00 13882.22 2900.57 12184.84 00:22:15.041 =================================================================================================================== 00:22:15.041 Total : 9177.22 35.85 0.00 0.00 13882.22 2900.57 12184.84 00:22:15.041 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:15.041 21:44:05 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:15.299 21:44:06 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:22:15.865 21:44:06 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:15.865 21:44:06 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:16.122 21:44:06 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:16.380 21:44:06 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:19.659 21:44:09 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.659 21:44:09 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 406442 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 406442 ']' 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 406442 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 406442 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 406442' 00:22:19.659 killing process with pid 406442 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 
406442 00:22:19.659 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 406442 00:22:19.917 21:44:10 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:19.917 21:44:10 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.176 rmmod nvme_tcp 00:22:20.176 rmmod nvme_fabrics 00:22:20.176 rmmod nvme_keyring 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 404713 ']' 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 404713 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 404713 ']' 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 404713 00:22:20.176 21:44:10 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 404713 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 404713' 00:22:20.176 killing process with pid 404713 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 404713 00:22:20.176 21:44:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 404713 00:22:20.435 21:44:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.435 21:44:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.435 21:44:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.435 21:44:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.435 21:44:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.435 21:44:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.435 21:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.435 21:44:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.974 21:44:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:22.974 00:22:22.974 real 0m35.345s 00:22:22.974 user 2m7.154s 00:22:22.974 sys 0m5.469s 00:22:22.974 21:44:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:22.974 21:44:13 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:22:22.974 ************************************ 00:22:22.974 END TEST nvmf_failover 00:22:22.974 ************************************ 00:22:22.974 21:44:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:22.974 21:44:13 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:22.974 21:44:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:22.974 21:44:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:22.974 21:44:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.974 ************************************ 00:22:22.974 START TEST nvmf_host_discovery 00:22:22.974 ************************************ 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:22.974 * Looking for test storage... 
00:22:22.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.974 21:44:13 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.974 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:22:22.975 21:44:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.349 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:24.350 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.350 21:44:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:24.350 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.350 
21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:24.350 Found net devices under 0000:08:00.0: cvl_0_0 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:24.350 Found net devices under 0000:08:00.1: cvl_0_1 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.350 21:44:14 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:24.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:22:24.350 00:22:24.350 --- 10.0.0.2 ping statistics --- 00:22:24.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.350 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:22:24.350 00:22:24.350 --- 10.0.0.1 ping statistics --- 00:22:24.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.350 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:24.350 21:44:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=409058 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 409058 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 409058 ']' 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.350 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.350 [2024-07-15 21:44:15.070418] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:22:24.350 [2024-07-15 21:44:15.070508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.350 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.350 [2024-07-15 21:44:15.134098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.608 [2024-07-15 21:44:15.249951] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.608 [2024-07-15 21:44:15.250016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.608 [2024-07-15 21:44:15.250032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.608 [2024-07-15 21:44:15.250046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.608 [2024-07-15 21:44:15.250058] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:24.608 [2024-07-15 21:44:15.250087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.609 [2024-07-15 21:44:15.386455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.609 [2024-07-15 21:44:15.394583] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- 
# rpc_cmd bdev_null_create null0 1000 512 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.609 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.867 null0 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.867 null1 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=409112 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 409112 /tmp/host.sock 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 409112 ']' 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:24.867 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.867 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.867 [2024-07-15 21:44:15.473990] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:22:24.867 [2024-07-15 21:44:15.474080] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid409112 ] 00:22:24.867 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.867 [2024-07-15 21:44:15.533356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.867 [2024-07-15 21:44:15.650048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.125 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:25.383 21:44:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.383 [2024-07-15 21:44:16.044276] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:25.383 21:44:16 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:25.383 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:25.384 21:44:16 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:25.384 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.640 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:25.640 21:44:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:26.204 [2024-07-15 21:44:16.813229] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:26.204 [2024-07-15 21:44:16.813256] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:26.204 [2024-07-15 21:44:16.813278] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:26.204 [2024-07-15 21:44:16.900546] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:26.461 [2024-07-15 21:44:17.085027] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 
00:22:26.461 [2024-07-15 21:44:17.085050] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.461 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:26.718 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# get_bdev_list 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:26.719 21:44:17 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.719 [2024-07-15 21:44:17.484288] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:26.719 [2024-07-15 21:44:17.484516] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:26.719 [2024-07-15 21:44:17.484554] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:26.719 21:44:17 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:26.719 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.975 [2024-07-15 21:44:17.571207] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:26.975 21:44:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:27.231 [2024-07-15 21:44:17.870527] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:27.231 [2024-07-15 21:44:17.870550] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:27.231 [2024-07-15 21:44:17.870559] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.161 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_notification_count 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.162 [2024-07-15 21:44:18.708019] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:28.162 [2024-07-15 21:44:18.708049] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:28.162 [2024-07-15 21:44:18.716700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.162 [2024-07-15 21:44:18.716739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.162 [2024-07-15 21:44:18.716758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.162 [2024-07-15 21:44:18.716771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.162 [2024-07-15 21:44:18.716785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.162 [2024-07-15 
21:44:18.716798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.162 [2024-07-15 21:44:18.716813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:28.162 [2024-07-15 21:44:18.716825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.162 [2024-07-15 21:44:18.716839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68ec0 is same with the state(5) to be set 00:22:28.162 [2024-07-15 21:44:18.726705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68ec0 (9): Bad file descriptor 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.162 [2024-07-15 21:44:18.736747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:28.162 [2024-07-15 21:44:18.736874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.162 [2024-07-15 21:44:18.736905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c68ec0 with addr=10.0.0.2, port=4420 00:22:28.162 [2024-07-15 21:44:18.736922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68ec0 is same with the state(5) to be set 00:22:28.162 [2024-07-15 21:44:18.736943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68ec0 (9): Bad file descriptor 00:22:28.162 [2024-07-15 21:44:18.736962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:28.162 [2024-07-15 21:44:18.736975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
00:22:28.162 [2024-07-15 21:44:18.736989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:28.162 [2024-07-15 21:44:18.737007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:28.162 [2024-07-15 21:44:18.746820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:28.162 [2024-07-15 21:44:18.746915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.162 [2024-07-15 21:44:18.746940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c68ec0 with addr=10.0.0.2, port=4420 00:22:28.162 [2024-07-15 21:44:18.746955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68ec0 is same with the state(5) to be set 00:22:28.162 [2024-07-15 21:44:18.746975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68ec0 (9): Bad file descriptor 00:22:28.162 [2024-07-15 21:44:18.746993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:28.162 [2024-07-15 21:44:18.747005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:28.162 [2024-07-15 21:44:18.747017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:28.162 [2024-07-15 21:44:18.747035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:28.162 [2024-07-15 21:44:18.756890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:28.162 [2024-07-15 21:44:18.757000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.162 [2024-07-15 21:44:18.757025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c68ec0 with addr=10.0.0.2, port=4420 00:22:28.162 [2024-07-15 21:44:18.757039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68ec0 is same with the state(5) to be set 00:22:28.162 [2024-07-15 21:44:18.757060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68ec0 (9): Bad file descriptor 00:22:28.162 [2024-07-15 21:44:18.757078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:28.162 [2024-07-15 21:44:18.757090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:28.162 [2024-07-15 21:44:18.757102] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:28.162 [2024-07-15 21:44:18.757125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:28.162 [2024-07-15 21:44:18.766961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:28.162 [2024-07-15 21:44:18.767061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.162 [2024-07-15 21:44:18.767085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c68ec0 with addr=10.0.0.2, port=4420 00:22:28.162 [2024-07-15 21:44:18.767100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68ec0 is same with the state(5) to be set 00:22:28.162 [2024-07-15 21:44:18.767121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68ec0 (9): Bad file descriptor 00:22:28.162 [2024-07-15 21:44:18.767147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:28.162 [2024-07-15 21:44:18.767161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:28.162 
[2024-07-15 21:44:18.767174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:28.162 [2024-07-15 21:44:18.767192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:28.162 [2024-07-15 21:44:18.777031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:28.162 [2024-07-15 21:44:18.777151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.162 [2024-07-15 21:44:18.777176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c68ec0 with addr=10.0.0.2, port=4420 00:22:28.162 [2024-07-15 21:44:18.777203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68ec0 is same with the state(5) to be set 00:22:28.162 [2024-07-15 21:44:18.777223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68ec0 (9): Bad file descriptor 00:22:28.162 [2024-07-15 21:44:18.777241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:28.162 [2024-07-15 21:44:18.777254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:28.162 [2024-07-15 21:44:18.777266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:28.162 [2024-07-15 21:44:18.777295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.162 [2024-07-15 21:44:18.787097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:28.162 [2024-07-15 21:44:18.787193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.162 [2024-07-15 21:44:18.787217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c68ec0 with addr=10.0.0.2, port=4420 00:22:28.162 [2024-07-15 21:44:18.787232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c68ec0 is same with the state(5) to be set 00:22:28.162 [2024-07-15 21:44:18.787252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c68ec0 (9): Bad file descriptor 00:22:28.162 [2024-07-15 21:44:18.787270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:28.162 [2024-07-15 21:44:18.787287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:28.162 [2024-07-15 21:44:18.787299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:28.162 [2024-07-15 21:44:18.787316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:28.162 [2024-07-15 21:44:18.794369] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:28.162 [2024-07-15 21:44:18.794396] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:28.162 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:28.163 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:28.419 21:44:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.419 21:44:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.419 21:44:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:28.419 21:44:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:28.419 21:44:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:28.419 21:44:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:28.419 21:44:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:28.419 21:44:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.419 21:44:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.348 [2024-07-15 21:44:20.092078] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:29.348 [2024-07-15 21:44:20.092110] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:29.348 [2024-07-15 21:44:20.092131] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:29.605 [2024-07-15 21:44:20.178432] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:29.863 [2024-07-15 21:44:20.440607] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:29.863 [2024-07-15 21:44:20.440645] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:29.863 21:44:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.863 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.863 request: 00:22:29.863 { 00:22:29.864 "name": "nvme", 00:22:29.864 "trtype": "tcp", 00:22:29.864 "traddr": "10.0.0.2", 00:22:29.864 "adrfam": "ipv4", 00:22:29.864 "trsvcid": "8009", 00:22:29.864 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:29.864 "wait_for_attach": true, 00:22:29.864 "method": "bdev_nvme_start_discovery", 00:22:29.864 "req_id": 1 00:22:29.864 } 00:22:29.864 Got JSON-RPC error response 00:22:29.864 response: 00:22:29.864 { 00:22:29.864 "code": -17, 00:22:29.864 "message": "File exists" 
00:22:29.864 } 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.864 request: 00:22:29.864 { 00:22:29.864 "name": "nvme_second", 00:22:29.864 "trtype": "tcp", 00:22:29.864 "traddr": "10.0.0.2", 00:22:29.864 "adrfam": "ipv4", 00:22:29.864 "trsvcid": "8009", 
00:22:29.864 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:29.864 "wait_for_attach": true, 00:22:29.864 "method": "bdev_nvme_start_discovery", 00:22:29.864 "req_id": 1 00:22:29.864 } 00:22:29.864 Got JSON-RPC error response 00:22:29.864 response: 00:22:29.864 { 00:22:29.864 "code": -17, 00:22:29.864 "message": "File exists" 00:22:29.864 } 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.864 21:44:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.864 21:44:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.235 [2024-07-15 21:44:21.647919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.235 [2024-07-15 21:44:21.647959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca72c0 with addr=10.0.0.2, port=8010 00:22:31.235 [2024-07-15 21:44:21.647983] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:31.235 [2024-07-15 21:44:21.647998] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:31.235 [2024-07-15 21:44:21.648010] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:32.168 [2024-07-15 21:44:22.650385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.168 [2024-07-15 21:44:22.650434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca72c0 with addr=10.0.0.2, port=8010 00:22:32.168 [2024-07-15 21:44:22.650459] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:32.168 [2024-07-15 21:44:22.650474] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:32.168 [2024-07-15 21:44:22.650486] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:33.100 [2024-07-15 21:44:23.652667] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:33.100 request: 00:22:33.100 { 00:22:33.100 "name": "nvme_second", 00:22:33.100 "trtype": "tcp", 00:22:33.100 "traddr": "10.0.0.2", 00:22:33.100 "adrfam": "ipv4", 00:22:33.100 "trsvcid": "8010", 00:22:33.100 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:33.100 "wait_for_attach": false, 00:22:33.100 "attach_timeout_ms": 3000, 00:22:33.100 "method": "bdev_nvme_start_discovery", 
00:22:33.100 "req_id": 1 00:22:33.100 } 00:22:33.100 Got JSON-RPC error response 00:22:33.100 response: 00:22:33.100 { 00:22:33.100 "code": -110, 00:22:33.100 "message": "Connection timed out" 00:22:33.100 } 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 409112 00:22:33.100 21:44:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.101 rmmod nvme_tcp 00:22:33.101 rmmod nvme_fabrics 00:22:33.101 rmmod nvme_keyring 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 409058 ']' 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 409058 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 409058 ']' 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 409058 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 409058 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 409058' 00:22:33.101 killing process with pid 409058 00:22:33.101 21:44:23 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 409058 00:22:33.101 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 409058 00:22:33.360 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.360 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.360 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.360 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.360 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.360 21:44:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.360 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.360 21:44:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.268 21:44:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:35.268 00:22:35.268 real 0m12.841s 00:22:35.268 user 0m19.243s 00:22:35.268 sys 0m2.385s 00:22:35.268 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:35.268 21:44:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.269 ************************************ 00:22:35.269 END TEST nvmf_host_discovery 00:22:35.269 ************************************ 00:22:35.269 21:44:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:35.534 21:44:26 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:35.534 21:44:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:35.534 21:44:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:22:35.534 21:44:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:35.534 ************************************ 00:22:35.534 START TEST nvmf_host_multipath_status 00:22:35.534 ************************************ 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:35.534 * Looking for test storage... 00:22:35.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:35.534 21:44:26 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.534 
21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:35.534 21:44:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:37.502 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:37.502 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:37.502 Found net devices under 0000:08:00.0: cvl_0_0 00:22:37.502 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:37.503 Found net devices under 0000:08:00.1: cvl_0_1 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.503 21:44:27 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:37.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms
00:22:37.503
00:22:37.503 --- 10.0.0.2 ping statistics ---
00:22:37.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:37.503 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:37.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:37.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms
00:22:37.503
00:22:37.503 --- 10.0.0.1 ping statistics ---
00:22:37.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:37.503 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- 
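[Editor's note] The nvmf_tcp_init sequence logged above (namespace creation, address assignment, firewall rule, bidirectional ping check) can be condensed into a standalone sketch. This is an illustrative reconstruction, not SPDK's actual helper: the function name `setup_test_net` is invented, while the interface names (cvl_0_0, cvl_0_1), addresses, and port are taken from the log. It mutates host networking and needs root, so it is only defined, not invoked, here.

```shell
#!/usr/bin/env bash
# Sketch of the network bring-up performed by nvmf_tcp_init in the log.
# setup_test_net is a hypothetical wrapper; interface names and addresses
# are copied from the log output above. Requires root to actually run.
setup_test_net() {
    local target_ns=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$target_ns"
    ip link set cvl_0_0 netns "$target_ns"   # target port lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
    ip netns exec "$target_ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

    ip link set cvl_0_1 up
    ip netns exec "$target_ns" ip link set cvl_0_0 up
    ip netns exec "$target_ns" ip link set lo up

    # accept NVMe/TCP traffic arriving on the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # verify reachability in both directions, as the log does
    ping -c 1 10.0.0.2
    ip netns exec "$target_ns" ping -c 1 10.0.0.1
}
```

Deleting the namespace (`ip netns delete cvl_0_0_ns_spdk`) undoes the split: a physical device returns to the root namespace when its namespace is removed.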
common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=411543 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 411543 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 411543 ']' 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.503 21:44:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.503 [2024-07-15 21:44:28.007891] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:22:37.503 [2024-07-15 21:44:28.007996] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:37.503 EAL: No free 2048 kB hugepages reported on node 1
00:22:37.503 [2024-07-15 21:44:28.073339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:37.503 [2024-07-15 21:44:28.192789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:37.503 [2024-07-15 21:44:28.192848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:37.503 [2024-07-15 21:44:28.192874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:37.503 [2024-07-15 21:44:28.192894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:37.503 [2024-07-15 21:44:28.192913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:37.503 [2024-07-15 21:44:28.192977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.503 [2024-07-15 21:44:28.192985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.761 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.761 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:37.761 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:37.761 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:37.761 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.761 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.761 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=411543 00:22:37.761 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:38.019 [2024-07-15 21:44:28.601722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.019 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:38.277 Malloc0 00:22:38.277 21:44:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:38.535 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
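[Editor's note] The target-side configuration the test drives through rpc.py (multipath_status.sh lines 36-42 above) can be sketched as one helper. `setup_target` is a hypothetical wrapper name; every argument is copied from the log, and a running nvmf_tgt listening on the default /var/tmp/spdk.sock is assumed.

```shell
#!/usr/bin/env bash
# Sketch of the target-side RPC sequence from the log. setup_target is a
# hypothetical wrapper; all arguments are copied verbatim from the log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

setup_target() {
    # TCP transport; options (-o -u 8192) taken verbatim from the log
    "$rpc" nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB RAM-backed bdev with 512-byte blocks
    "$rpc" bdev_malloc_create 64 512 -b Malloc0

    # subsystem with ANA reporting enabled (-r), at most 2 namespaces (-m 2)
    "$rpc" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
    "$rpc" nvmf_subsystem_add_ns "$NQN" Malloc0

    # two listeners on the same address: the two paths exercised by the test
    "$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
}
```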
nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:38.792 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.049 [2024-07-15 21:44:29.708207] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.049 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:39.307 [2024-07-15 21:44:29.960903] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=411765 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 411765 /var/tmp/bdevperf.sock 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 411765 ']' 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:39.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.307 21:44:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:39.564 21:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.564 21:44:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:39.564 21:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:39.822 21:44:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:40.386 Nvme0n1 00:22:40.386 21:44:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:40.949 Nvme0n1 00:22:40.949 21:44:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:40.949 21:44:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:42.843 21:44:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:42.843 21:44:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
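[Editor's note] On the initiator side the log starts bdevperf with -z (wait for RPC configuration), then attaches the same controller name over both ports; the second attach uses -x multipath, which adds port 4421 as an extra path to Nvme0 instead of creating a new controller. A sketch under those assumptions (`attach_paths` is a hypothetical wrapper; arguments copied from the log):

```shell
#!/usr/bin/env bash
# Sketch of the initiator-side path setup from the log. attach_paths is a
# hypothetical wrapper; arguments are copied from the log and assume a
# bdevperf instance started with: bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

attach_paths() {
    # retry option (-r -1) copied verbatim from the log
    "$rpc" -s "$sock" bdev_nvme_set_options -r -1

    # first path: creates controller Nvme0 (and bdev Nvme0n1)
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -l -1 -o 10

    # second path: -x multipath joins port 4421 to the same controller
    # (reconnect settings -l -1 -o 10 copied from the log)
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x multipath -l -1 -o 10
}
```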
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:43.100 21:44:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:43.664 21:44:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:44.596 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:44.596 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:44.596 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.596 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:44.853 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.853 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:44.853 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.853 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:45.110 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:45.110 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- 
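[Editor's note] The set_ANA_state helper invoked above (e.g. `set_ANA_state optimized optimized`) assigns one ANA state per listener; the host's multipath policy then selects the optimized path as current. A reconstruction of that helper from the log's RPC calls (the function name matches the log; the one-state-per-port argument convention is inferred):

```shell
#!/usr/bin/env bash
# Reconstruction of the set_ANA_state helper seen in the log: give each
# listener an ANA state (optimized / non_optimized / inaccessible).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {
    local state_4420=$1 state_4421=$2
    "$rpc" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    "$rpc" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
}
```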
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:45.110 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.110 21:44:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:45.367 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.367 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:45.367 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.367 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:45.624 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.624 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:45.624 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.624 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:45.882 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.882 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:45.882 21:44:36 
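[Editor's note] The repeated port_status checks above all follow one pattern: query bdev_nvme_get_io_paths over the bdevperf RPC socket, extract a single boolean (current / connected / accessible) for the path on a given port with jq, and compare it to the expected value. A reconstruction of that helper (jq filter and paths taken from the log; requires jq and a live bdevperf socket to actually run):

```shell
#!/usr/bin/env bash
# Reconstruction of the port_status helper from the log: pull one boolean
# attribute for the I/O path on a given port and compare with expectation.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$("$rpc" -s "$sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

# As used in the log, e.g. after set_ANA_state optimized optimized:
#   port_status 4420 current true
#   port_status 4421 current false
```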
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.882 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:46.444 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.444 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:46.444 21:44:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:46.701 21:44:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:46.958 21:44:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:47.940 21:44:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:47.940 21:44:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:47.940 21:44:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.940 21:44:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:48.197 21:44:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:22:48.197 21:44:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:48.197 21:44:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.197 21:44:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:48.453 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.453 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:48.453 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.453 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:48.710 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.710 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:48.710 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.710 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:48.968 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.968 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:22:48.968 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.968 21:44:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:49.531 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.531 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:49.531 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.531 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:49.789 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.789 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:49.789 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:50.046 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:50.302 21:44:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:51.232 21:44:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:51.232 21:44:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:51.232 21:44:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.232 21:44:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:51.489 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.489 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:51.489 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.489 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:51.747 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.747 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:51.747 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.747 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:52.311 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.311 21:44:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:52.311 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.311 21:44:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:52.569 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.569 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:52.569 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.569 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:52.826 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.826 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:52.826 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.826 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:53.082 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:53.082 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:22:53.082 21:44:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:53.339 21:44:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:53.596 21:44:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:54.967 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:54.967 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:54.967 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.967 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:54.967 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.967 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:54.967 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.967 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:55.225 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:22:55.225 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:55.225 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.225 21:44:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.482 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.482 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.483 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.483 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.740 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.740 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.740 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.740 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.998 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.998 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:22:55.998 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.998 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:56.287 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:56.287 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:56.287 21:44:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:56.569 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:56.569 21:44:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:57.942 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:57.942 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:57.942 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.942 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.942 21:44:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.942 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:57.942 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.942 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.200 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.200 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.200 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.200 21:44:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.456 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.456 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.456 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.456 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.714 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.714 
21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:58.714 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.714 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:58.971 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.971 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:58.971 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.971 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.228 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:59.228 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:59.228 21:44:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:59.486 21:44:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:59.743 21:44:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:23:00.675 21:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:00.675 21:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:00.675 21:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.675 21:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:01.241 21:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.241 21:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:01.241 21:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.241 21:44:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.499 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.499 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.499 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.499 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.757 21:44:52 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.757 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.757 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.757 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:02.014 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.014 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:02.014 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.014 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:02.272 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:02.272 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:02.272 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.272 21:44:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.530 21:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.530 21:44:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:02.787 21:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:02.787 21:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:02.787 21:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:03.045 21:44:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:04.416 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:04.416 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:04.416 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.416 21:44:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.416 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.416 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:04.416 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.416 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.673 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.673 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.673 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.673 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.931 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.931 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.931 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.931 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.495 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.495 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:05.495 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:23:05.495 21:44:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.495 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.495 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:05.495 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.495 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:06.060 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:06.060 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:06.060 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:06.060 21:44:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:06.318 21:44:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:07.689 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:07.689 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:07.689 21:44:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.689 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.689 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.689 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:07.689 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.689 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.947 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.947 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.947 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.947 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:08.205 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.205 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:08.205 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:08.205 21:44:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:08.777 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:08.777 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:08.777 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:08.777 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:08.777 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:08.777 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:08.778 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:08.778 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:09.343 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:09.343 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:23:09.343 21:44:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:09.600 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:23:09.857 21:45:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:23:10.789 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:23:10.789 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:10.789 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:10.789 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:11.102 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:11.102 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:23:11.102 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:11.102 21:45:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:11.360 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:11.360 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:11.360 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:11.360 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:11.617 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:11.617 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:11.617 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:11.617 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:11.874 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:11.874 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:11.874 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:11.874 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:12.439 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:12.439 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:23:12.439 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:12.439 21:45:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:12.697 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:12.697 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:23:12.697 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:23:12.954 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:23:13.211 21:45:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:23:14.143 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:23:14.143 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:23:14.143 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:14.143 21:45:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:23:14.401 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:14.401 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:23:14.401 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:14.401 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:23:14.965 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:14.965 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:23:14.965 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:14.965 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:23:14.965 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:14.965 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:23:14.965 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:14.965 21:45:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:23:15.528 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:15.528 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:23:15.528 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:15.528 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:23:15.784 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:23:15.784 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:23:15.785 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:23:15.785 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 411765
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 411765 ']'
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 411765
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 411765
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 411765'
killing process with pid 411765
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 411765
00:23:16.042 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 411765
00:23:16.042 Connection closed with partial response:
00:23:16.042
00:23:16.042
00:23:16.302 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 411765
00:23:16.302 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:16.302 [2024-07-15 21:44:30.022059] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:23:16.302 [2024-07-15 21:44:30.022198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid411765 ]
00:23:16.302 EAL: No free 2048 kB hugepages reported on node 1
00:23:16.302 [2024-07-15 21:44:30.071071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:16.302 [2024-07-15 21:44:30.168729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:16.302 Running I/O for 90 seconds...
00:23:16.302 [2024-07-15 21:44:47.078065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 
[2024-07-15 21:44:47.078366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 
21:44:47.078583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078782] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.078967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.078989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.302 [2024-07-15 21:44:47.079672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:16.302 [2024-07-15 21:44:47.079938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.302 [2024-07-15 21:44:47.079953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.079977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.079993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:16.303 [2024-07-15 21:44:47.080704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.303 [2024-07-15 21:44:47.080719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[repetitive log output condensed: between 2024-07-15 21:44:47 and 21:45:03 the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for further READ and WRITE commands on qid:1 (lba 76240 through 87656, len:8), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0]
00:23:16.306 [2024-07-15 21:45:03.833643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.306 [2024-07-15 21:45:03.833658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:16.306 [2024-07-15 21:45:03.833680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.306 [2024-07-15 21:45:03.833695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:16.306 [2024-07-15 21:45:03.833718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.306 [2024-07-15 21:45:03.833733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:16.306 Received shutdown signal, test time was about 34.907819 seconds 00:23:16.306 00:23:16.306 Latency(us) 00:23:16.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.306 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:16.306 Verification LBA range: start 0x0 length 0x4000 00:23:16.306 Nvme0n1 : 34.91 8571.39 33.48 0.00 0.00 14907.10 646.26 4026531.84 00:23:16.306 =================================================================================================================== 00:23:16.306 Total : 8571.39 33.48 0.00 0.00 14907.10 646.26 4026531.84 00:23:16.306 21:45:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:16.602 rmmod nvme_tcp 00:23:16.602 rmmod nvme_fabrics 00:23:16.602 rmmod nvme_keyring 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 411543 ']' 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 411543 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 411543 ']' 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 411543 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 411543 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 411543' 00:23:16.602 killing process with pid 411543 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 411543 00:23:16.602 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 411543 00:23:16.892 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.892 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.892 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.892 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.892 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.892 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.892 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.892 21:45:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.795 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:18.795 00:23:18.795 real 0m43.416s 00:23:18.795 user 2m14.871s 00:23:18.795 sys 0m9.983s 00:23:18.795 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.795 21:45:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:18.795 ************************************ 00:23:18.795 END TEST nvmf_host_multipath_status 00:23:18.795 ************************************ 00:23:18.795 21:45:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:18.795 21:45:09 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:18.795 21:45:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:18.795 21:45:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.795 21:45:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.795 ************************************ 00:23:18.795 START TEST nvmf_discovery_remove_ifc 00:23:18.795 ************************************ 00:23:18.795 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:19.075 * Looking for test storage... 00:23:19.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.075 21:45:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:19.075 21:45:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.075 21:45:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.452 21:45:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:20.452 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp 
== rdma ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:20.452 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:20.452 Found net devices under 0000:08:00.0: cvl_0_0 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:20.452 Found net devices under 0000:08:00.1: cvl_0_1 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.452 21:45:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.452 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:23:20.710 00:23:20.710 --- 10.0.0.2 ping statistics --- 00:23:20.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.710 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:23:20.710 00:23:20.710 --- 10.0.0.1 ping statistics --- 00:23:20.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.710 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=417474 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 417474 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 417474 ']' 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.710 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.710 [2024-07-15 21:45:11.377499] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:23:20.710 [2024-07-15 21:45:11.377603] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.711 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.711 [2024-07-15 21:45:11.443299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.968 [2024-07-15 21:45:11.558851] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.968 [2024-07-15 21:45:11.558911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.968 [2024-07-15 21:45:11.558927] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.968 [2024-07-15 21:45:11.558940] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.968 [2024-07-15 21:45:11.558952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
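For readers following the trace: the `waitforlisten 417474` call above polls until the freshly launched `nvmf_tgt` is accepting RPCs on its UNIX-domain socket (`/var/tmp/spdk.sock`), retrying up to `max_retries=100` times. A minimal stand-alone sketch of that polling idiom follows; the function name mirrors the log, but the check is simplified to a path-existence test (the real helper also issues an RPC to confirm the target answers, which is omitted here):

```shell
#!/usr/bin/env bash
# Simplified re-creation of the waitforlisten idiom seen in the trace:
# poll until the target's RPC socket path appears, giving up after
# max_retries attempts. Path existence stands in for a real socket
# round-trip (an assumption made for this sketch).
waitforlisten_sketch() {
    local sock_path=$1
    local max_retries=${2:-100}
    local i=0
    while (( i < max_retries )); do
        if [[ -e "$sock_path" ]]; then
            return 0
        fi
        sleep 0.1
        i=$(( i + 1 ))
    done
    return 1
}
```

The return code drives the caller the same way the log shows: success falls through to `timing_exit start_nvmf_tgt`, failure would trip the `trap ... SIGINT SIGTERM EXIT` cleanup.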
00:23:20.968 [2024-07-15 21:45:11.558989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.968 [2024-07-15 21:45:11.693033] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.968 [2024-07-15 21:45:11.701227] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:20.968 null0 00:23:20.968 [2024-07-15 21:45:11.733163] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=417496 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:20.968 21:45:11 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 417496 /tmp/host.sock 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 417496 ']' 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:20.968 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.968 21:45:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.226 [2024-07-15 21:45:11.803694] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:23:21.226 [2024-07-15 21:45:11.803800] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417496 ] 00:23:21.226 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.226 [2024-07-15 21:45:11.866881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.226 [2024-07-15 21:45:11.979676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.484 21:45:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.417 [2024-07-15 21:45:13.180883] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:22.417 [2024-07-15 21:45:13.180939] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:22.417 [2024-07-15 21:45:13.180963] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:22.674 [2024-07-15 21:45:13.308374] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:22.674 [2024-07-15 21:45:13.371414] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:22.674 [2024-07-15 21:45:13.371481] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:22.674 [2024-07-15 21:45:13.371517] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:22.674 [2024-07-15 21:45:13.371540] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:22.674 [2024-07-15 21:45:13.371573] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:22.674 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.674 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:22.674 21:45:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.674 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.674 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.675 [2024-07-15 21:45:13.378762] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24cdd70 was disconnected and freed. delete nvme_qpair. 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.675 21:45:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.675 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.931 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.931 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:22.931 21:45:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:23.864 21:45:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.797 21:45:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:24.797 21:45:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.172 21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:26.172 
21:45:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:27.104 21:45:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.036 
21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:28.036 21:45:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.036 [2024-07-15 21:45:18.812976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:28.036 [2024-07-15 21:45:18.813048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.036 [2024-07-15 21:45:18.813069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.036 [2024-07-15 21:45:18.813087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.036 [2024-07-15 21:45:18.813100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.036 [2024-07-15 21:45:18.813113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.036 [2024-07-15 21:45:18.813126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.036 [2024-07-15 21:45:18.813144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.036 [2024-07-15 21:45:18.813164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.036 [2024-07-15 21:45:18.813178] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.036 [2024-07-15 21:45:18.813190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.036 [2024-07-15 21:45:18.813203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2494600 is same with the state(5) to be set 00:23:28.036 [2024-07-15 21:45:18.822984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2494600 (9): Bad file descriptor 00:23:28.293 [2024-07-15 21:45:18.833021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.225 [2024-07-15 21:45:19.846193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:29.225 [2024-07-15 21:45:19.846292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2494600 with addr=10.0.0.2, port=4420 00:23:29.225 [2024-07-15 21:45:19.846319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2494600 is same with the state(5) to be set 00:23:29.225 
[2024-07-15 21:45:19.846371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2494600 (9): Bad file descriptor 00:23:29.225 [2024-07-15 21:45:19.846849] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.225 [2024-07-15 21:45:19.846894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:29.225 [2024-07-15 21:45:19.846910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:29.225 [2024-07-15 21:45:19.846926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:29.225 [2024-07-15 21:45:19.846958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.225 [2024-07-15 21:45:19.846974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:29.225 21:45:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.156 [2024-07-15 21:45:20.849476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
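The repeating `get_bdev_list` / `sleep 1` cycle throughout this trace is the test's `wait_for_bdev` helper: it lists bdev names over RPC (`bdev_get_bdevs | jq -r '.[].name' | sort | xargs`) once per second until the list matches the expected value (`nvme0n1`, then `''` after the interface is torn down). A hedged stand-alone sketch of that loop, with `list_bdevs` as a hypothetical stand-in for the RPC pipeline and a bounded timeout added for illustration:

```shell
# Sketch of the wait_for_bdev polling loop driving the repeated
# bdev_get_bdevs output above. list_bdevs is a hypothetical stand-in for:
#   rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
wait_for_bdev_sketch() {
    local expected=$1
    local timeout=${2:-30}
    local waited=0
    while (( waited < timeout )); do
        if [[ "$(list_bdevs)" == "$expected" ]]; then
            return 0
        fi
        sleep 1
        waited=$(( waited + 1 ))
    done
    return 1
}
```

In the log, each iteration where the list still reads `nvme0n1` while waiting for `''` produces one more `[[ nvme0n1 != '' ]]` / `sleep 1` pair, which is exactly the cadence visible between the reconnect errors.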
00:23:30.156 [2024-07-15 21:45:20.849523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:30.156 [2024-07-15 21:45:20.849538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:30.156 [2024-07-15 21:45:20.849553] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:30.156 [2024-07-15 21:45:20.849582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:30.156 [2024-07-15 21:45:20.849622] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:30.156 [2024-07-15 21:45:20.849672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.156 [2024-07-15 21:45:20.849694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.156 [2024-07-15 21:45:20.849713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.156 [2024-07-15 21:45:20.849726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.156 [2024-07-15 21:45:20.849739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.156 [2024-07-15 21:45:20.849751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.156 [2024-07-15 21:45:20.849764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.156 
[2024-07-15 21:45:20.849777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.156 [2024-07-15 21:45:20.849790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.156 [2024-07-15 21:45:20.849802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.156 [2024-07-15 21:45:20.849815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:23:30.156 [2024-07-15 21:45:20.849865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2493a80 (9): Bad file descriptor 00:23:30.156 [2024-07-15 21:45:20.850857] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:30.156 [2024-07-15 21:45:20.850878] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.156 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.413 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.413 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:30.413 21:45:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.342 21:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.342 21:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.342 21:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.342 21:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.342 21:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.342 21:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.342 21:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.342 21:45:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.342 21:45:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:31.342 21:45:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.270 [2024-07-15 21:45:22.865228] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:32.270 [2024-07-15 21:45:22.865269] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:32.271 [2024-07-15 21:45:22.865292] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:32.271 [2024-07-15 21:45:22.951523] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:32.271 [2024-07-15 21:45:23.007859] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:32.271 [2024-07-15 21:45:23.007917] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:32.271 [2024-07-15 21:45:23.007949] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:32.271 [2024-07-15 21:45:23.007971] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:32.271 [2024-07-15 21:45:23.007984] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:32.271 [2024-07-15 21:45:23.013859] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24aa8d0 was disconnected and freed. delete nvme_qpair. 00:23:32.271 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.271 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.271 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.271 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.271 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.271 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.271 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.271 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 417496 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 417496 ']' 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 417496 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 417496 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 417496' 00:23:32.528 killing process with pid 417496 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 417496 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 417496 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.528 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.528 rmmod nvme_tcp 00:23:32.786 rmmod nvme_fabrics 00:23:32.786 rmmod nvme_keyring 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 417474 ']' 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 417474 
00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 417474 ']' 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 417474 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 417474 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 417474' 00:23:32.786 killing process with pid 417474 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 417474 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 417474 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
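The module unload in `nvmftestfini` runs under `set +e` inside a bounded loop (`for i in {1..20}` around `modprobe -v -r nvme-tcp`), because removal can fail transiently while connections drain, and errexit is restored afterwards. The same pattern generalized; the helper name, attempt count, and sleep interval are placeholders, not taken from nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Retry a command up to a bounded number of attempts, tolerating failures
# in between, then restore errexit. Mirrors the set +e / {1..20} loop above.
retry() {
    local attempts=$1 rc=1
    shift
    set +e
    for ((i = 1; i <= attempts; i++)); do
        if "$@"; then
            rc=0
            break
        fi
        sleep 0.1
    done
    set -e
    return $rc
}
```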
00:23:32.786 21:45:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.325 21:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.325 00:23:35.325 real 0m16.054s 00:23:35.325 user 0m23.420s 00:23:35.325 sys 0m2.464s 00:23:35.325 21:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:35.325 21:45:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.325 ************************************ 00:23:35.325 END TEST nvmf_discovery_remove_ifc 00:23:35.325 ************************************ 00:23:35.325 21:45:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:35.325 21:45:25 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:35.325 21:45:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:35.325 21:45:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.325 21:45:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:35.325 ************************************ 00:23:35.325 START TEST nvmf_identify_kernel_target 00:23:35.325 ************************************ 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:35.325 * Looking for test storage... 
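Each suite is launched through `run_test`, which prints the starred START/END banners visible above and times the run (the `real 0m16.054s` / `user` / `sys` block). A stripped-down version of that wrapper; the banner format is approximated from the log, not copied from autotest_common.sh:

```shell
#!/usr/bin/env bash
# Run a named test command with START/END banners and a timing report,
# propagating the command's exit status.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?                       # `time` preserves the command's status
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```

The timing output goes to stderr, which is why it interleaves with stdout in the captured log.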
00:23:35.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.325 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.326 21:45:25 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.326 21:45:25 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.326 21:45:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.707 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.707 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:36.707 21:45:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:36.707 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:36.707 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:36.707 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:36.707 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:36.708 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.708 21:45:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:36.708 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:36.708 Found net devices under 0000:08:00.0: cvl_0_0 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:36.708 Found net devices under 0000:08:00.1: cvl_0_1 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
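The device scan above buckets NICs by PCI vendor:device ID: Intel `0x1592`/`0x159b` land in the `e810` array, Intel `0x37d2` in `x722`, and a fixed set of Mellanox IDs in `mlx`, after which the TCP path keeps only the e810 devices (`Found 0000:08:00.0 (0x8086 - 0x159b)`). The same classification as a standalone function; the ID table is transcribed from the `pci_bus_cache` keys shown in the log, and the function name is invented:

```shell
#!/usr/bin/env bash
# Classify a "vendor:device" PCI ID pair into the NIC families the nvmf
# harness recognises (IDs from the pci_bus_cache lookups in nvmf/common.sh).
classify_nic() {
    case $1 in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013) echo mlx ;;
        *)                           echo unknown ;;
    esac
}
```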
00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:36.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:23:36.708 00:23:36.708 --- 10.0.0.2 ping statistics --- 00:23:36.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.708 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:23:36.708 00:23:36.708 --- 10.0.0.1 ping statistics --- 00:23:36.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.708 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:36.708 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.968 21:45:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:36.968 21:45:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:37.907 Waiting for block devices as requested 00:23:37.907 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:23:37.907 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:38.167 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:38.167 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:38.167 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:38.167 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:38.426 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:38.426 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:38.426 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:38.684 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:38.684 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:38.684 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:38.684 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:38.944 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:38.944 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:38.944 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:39.237 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:39.237 No valid GPT data, bailing 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:23:39.237 00:23:39.237 Discovery Log Number of Records 2, Generation counter 2 00:23:39.237 =====Discovery Log Entry 0====== 00:23:39.237 trtype: tcp 00:23:39.237 adrfam: ipv4 00:23:39.237 subtype: current discovery subsystem 00:23:39.237 treq: not specified, sq flow control disable supported 00:23:39.237 portid: 1 00:23:39.237 trsvcid: 4420 00:23:39.237 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:39.237 traddr: 10.0.0.1 00:23:39.237 eflags: none 00:23:39.237 sectype: none 00:23:39.237 =====Discovery Log Entry 1====== 00:23:39.237 trtype: tcp 00:23:39.237 adrfam: ipv4 00:23:39.237 subtype: nvme subsystem 00:23:39.237 treq: not specified, sq flow control disable supported 00:23:39.237 portid: 1 00:23:39.237 trsvcid: 4420 00:23:39.237 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:39.237 traddr: 10.0.0.1 00:23:39.237 eflags: none 00:23:39.237 sectype: none 00:23:39.237 21:45:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:39.237 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:39.237 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.560 ===================================================== 00:23:39.561 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:39.561 ===================================================== 00:23:39.561 Controller Capabilities/Features 00:23:39.561 ================================ 00:23:39.561 Vendor ID: 0000 00:23:39.561 Subsystem Vendor ID: 0000 00:23:39.561 Serial Number: 4f84e579d50d9a5f855e 00:23:39.561 Model Number: Linux 00:23:39.561 Firmware Version: 6.7.0-68 00:23:39.561 Recommended Arb Burst: 0 00:23:39.561 IEEE OUI Identifier: 00 00 00 00:23:39.561 Multi-path I/O 00:23:39.561 May have multiple subsystem ports: No 00:23:39.561 May have multiple controllers: No 00:23:39.561 Associated with SR-IOV VF: No 00:23:39.561 Max Data Transfer Size: Unlimited 00:23:39.561 Max Number of Namespaces: 0 00:23:39.561 Max Number of I/O Queues: 1024 00:23:39.561 NVMe Specification Version (VS): 1.3 00:23:39.561 NVMe Specification Version (Identify): 1.3 00:23:39.561 Maximum Queue Entries: 1024 00:23:39.561 Contiguous Queues Required: No 00:23:39.561 Arbitration Mechanisms Supported 00:23:39.561 Weighted Round Robin: Not Supported 00:23:39.561 Vendor Specific: Not Supported 00:23:39.561 Reset Timeout: 7500 ms 00:23:39.561 Doorbell Stride: 4 bytes 00:23:39.561 NVM Subsystem Reset: Not Supported 00:23:39.561 Command Sets Supported 00:23:39.561 NVM Command Set: Supported 00:23:39.561 Boot Partition: Not Supported 00:23:39.561 Memory Page Size Minimum: 4096 bytes 00:23:39.561 Memory Page Size Maximum: 4096 bytes 00:23:39.561 Persistent Memory Region: Not Supported 00:23:39.561 Optional Asynchronous Events Supported 00:23:39.561 Namespace Attribute Notices: Not Supported 00:23:39.561 Firmware Activation Notices: Not Supported 00:23:39.561 ANA Change Notices: Not Supported 00:23:39.561 PLE Aggregate Log Change Notices: Not Supported 
00:23:39.561 LBA Status Info Alert Notices: Not Supported 00:23:39.561 EGE Aggregate Log Change Notices: Not Supported 00:23:39.561 Normal NVM Subsystem Shutdown event: Not Supported 00:23:39.561 Zone Descriptor Change Notices: Not Supported 00:23:39.561 Discovery Log Change Notices: Supported 00:23:39.561 Controller Attributes 00:23:39.561 128-bit Host Identifier: Not Supported 00:23:39.561 Non-Operational Permissive Mode: Not Supported 00:23:39.561 NVM Sets: Not Supported 00:23:39.561 Read Recovery Levels: Not Supported 00:23:39.561 Endurance Groups: Not Supported 00:23:39.561 Predictable Latency Mode: Not Supported 00:23:39.561 Traffic Based Keep ALive: Not Supported 00:23:39.561 Namespace Granularity: Not Supported 00:23:39.561 SQ Associations: Not Supported 00:23:39.561 UUID List: Not Supported 00:23:39.561 Multi-Domain Subsystem: Not Supported 00:23:39.561 Fixed Capacity Management: Not Supported 00:23:39.561 Variable Capacity Management: Not Supported 00:23:39.561 Delete Endurance Group: Not Supported 00:23:39.561 Delete NVM Set: Not Supported 00:23:39.561 Extended LBA Formats Supported: Not Supported 00:23:39.561 Flexible Data Placement Supported: Not Supported 00:23:39.561 00:23:39.561 Controller Memory Buffer Support 00:23:39.561 ================================ 00:23:39.561 Supported: No 00:23:39.561 00:23:39.561 Persistent Memory Region Support 00:23:39.561 ================================ 00:23:39.561 Supported: No 00:23:39.561 00:23:39.561 Admin Command Set Attributes 00:23:39.561 ============================ 00:23:39.561 Security Send/Receive: Not Supported 00:23:39.561 Format NVM: Not Supported 00:23:39.561 Firmware Activate/Download: Not Supported 00:23:39.561 Namespace Management: Not Supported 00:23:39.561 Device Self-Test: Not Supported 00:23:39.561 Directives: Not Supported 00:23:39.561 NVMe-MI: Not Supported 00:23:39.561 Virtualization Management: Not Supported 00:23:39.561 Doorbell Buffer Config: Not Supported 00:23:39.561 Get LBA Status 
Capability: Not Supported 00:23:39.561 Command & Feature Lockdown Capability: Not Supported 00:23:39.561 Abort Command Limit: 1 00:23:39.561 Async Event Request Limit: 1 00:23:39.561 Number of Firmware Slots: N/A 00:23:39.561 Firmware Slot 1 Read-Only: N/A 00:23:39.561 Firmware Activation Without Reset: N/A 00:23:39.561 Multiple Update Detection Support: N/A 00:23:39.561 Firmware Update Granularity: No Information Provided 00:23:39.561 Per-Namespace SMART Log: No 00:23:39.561 Asymmetric Namespace Access Log Page: Not Supported 00:23:39.561 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:39.561 Command Effects Log Page: Not Supported 00:23:39.561 Get Log Page Extended Data: Supported 00:23:39.561 Telemetry Log Pages: Not Supported 00:23:39.561 Persistent Event Log Pages: Not Supported 00:23:39.561 Supported Log Pages Log Page: May Support 00:23:39.561 Commands Supported & Effects Log Page: Not Supported 00:23:39.561 Feature Identifiers & Effects Log Page:May Support 00:23:39.561 NVMe-MI Commands & Effects Log Page: May Support 00:23:39.561 Data Area 4 for Telemetry Log: Not Supported 00:23:39.561 Error Log Page Entries Supported: 1 00:23:39.561 Keep Alive: Not Supported 00:23:39.561 00:23:39.561 NVM Command Set Attributes 00:23:39.561 ========================== 00:23:39.561 Submission Queue Entry Size 00:23:39.561 Max: 1 00:23:39.561 Min: 1 00:23:39.561 Completion Queue Entry Size 00:23:39.561 Max: 1 00:23:39.561 Min: 1 00:23:39.561 Number of Namespaces: 0 00:23:39.561 Compare Command: Not Supported 00:23:39.561 Write Uncorrectable Command: Not Supported 00:23:39.561 Dataset Management Command: Not Supported 00:23:39.561 Write Zeroes Command: Not Supported 00:23:39.561 Set Features Save Field: Not Supported 00:23:39.561 Reservations: Not Supported 00:23:39.561 Timestamp: Not Supported 00:23:39.561 Copy: Not Supported 00:23:39.561 Volatile Write Cache: Not Present 00:23:39.561 Atomic Write Unit (Normal): 1 00:23:39.561 Atomic Write Unit (PFail): 1 
00:23:39.561 Atomic Compare & Write Unit: 1 00:23:39.561 Fused Compare & Write: Not Supported 00:23:39.561 Scatter-Gather List 00:23:39.561 SGL Command Set: Supported 00:23:39.561 SGL Keyed: Not Supported 00:23:39.561 SGL Bit Bucket Descriptor: Not Supported 00:23:39.561 SGL Metadata Pointer: Not Supported 00:23:39.561 Oversized SGL: Not Supported 00:23:39.561 SGL Metadata Address: Not Supported 00:23:39.561 SGL Offset: Supported 00:23:39.561 Transport SGL Data Block: Not Supported 00:23:39.561 Replay Protected Memory Block: Not Supported 00:23:39.561 00:23:39.561 Firmware Slot Information 00:23:39.561 ========================= 00:23:39.561 Active slot: 0 00:23:39.561 00:23:39.561 00:23:39.561 Error Log 00:23:39.561 ========= 00:23:39.561 00:23:39.561 Active Namespaces 00:23:39.561 ================= 00:23:39.561 Discovery Log Page 00:23:39.561 ================== 00:23:39.561 Generation Counter: 2 00:23:39.561 Number of Records: 2 00:23:39.561 Record Format: 0 00:23:39.561 00:23:39.561 Discovery Log Entry 0 00:23:39.561 ---------------------- 00:23:39.561 Transport Type: 3 (TCP) 00:23:39.561 Address Family: 1 (IPv4) 00:23:39.561 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:39.561 Entry Flags: 00:23:39.561 Duplicate Returned Information: 0 00:23:39.561 Explicit Persistent Connection Support for Discovery: 0 00:23:39.561 Transport Requirements: 00:23:39.561 Secure Channel: Not Specified 00:23:39.561 Port ID: 1 (0x0001) 00:23:39.561 Controller ID: 65535 (0xffff) 00:23:39.561 Admin Max SQ Size: 32 00:23:39.561 Transport Service Identifier: 4420 00:23:39.561 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:39.561 Transport Address: 10.0.0.1 00:23:39.561 Discovery Log Entry 1 00:23:39.561 ---------------------- 00:23:39.561 Transport Type: 3 (TCP) 00:23:39.561 Address Family: 1 (IPv4) 00:23:39.561 Subsystem Type: 2 (NVM Subsystem) 00:23:39.561 Entry Flags: 00:23:39.561 Duplicate Returned Information: 0 00:23:39.561 Explicit Persistent 
Connection Support for Discovery: 0 00:23:39.561 Transport Requirements: 00:23:39.561 Secure Channel: Not Specified 00:23:39.561 Port ID: 1 (0x0001) 00:23:39.561 Controller ID: 65535 (0xffff) 00:23:39.561 Admin Max SQ Size: 32 00:23:39.561 Transport Service Identifier: 4420 00:23:39.561 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:39.561 Transport Address: 10.0.0.1 00:23:39.561 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:39.561 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.561 get_feature(0x01) failed 00:23:39.561 get_feature(0x02) failed 00:23:39.561 get_feature(0x04) failed 00:23:39.561 ===================================================== 00:23:39.561 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:39.561 ===================================================== 00:23:39.561 Controller Capabilities/Features 00:23:39.561 ================================ 00:23:39.561 Vendor ID: 0000 00:23:39.561 Subsystem Vendor ID: 0000 00:23:39.561 Serial Number: 03182526b78f661a8dc7 00:23:39.561 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:39.561 Firmware Version: 6.7.0-68 00:23:39.561 Recommended Arb Burst: 6 00:23:39.561 IEEE OUI Identifier: 00 00 00 00:23:39.561 Multi-path I/O 00:23:39.561 May have multiple subsystem ports: Yes 00:23:39.561 May have multiple controllers: Yes 00:23:39.561 Associated with SR-IOV VF: No 00:23:39.561 Max Data Transfer Size: Unlimited 00:23:39.561 Max Number of Namespaces: 1024 00:23:39.561 Max Number of I/O Queues: 128 00:23:39.561 NVMe Specification Version (VS): 1.3 00:23:39.561 NVMe Specification Version (Identify): 1.3 00:23:39.561 Maximum Queue Entries: 1024 00:23:39.561 Contiguous Queues Required: No 00:23:39.561 Arbitration Mechanisms Supported 
00:23:39.561 Weighted Round Robin: Not Supported 00:23:39.561 Vendor Specific: Not Supported 00:23:39.561 Reset Timeout: 7500 ms 00:23:39.561 Doorbell Stride: 4 bytes 00:23:39.561 NVM Subsystem Reset: Not Supported 00:23:39.561 Command Sets Supported 00:23:39.561 NVM Command Set: Supported 00:23:39.561 Boot Partition: Not Supported 00:23:39.561 Memory Page Size Minimum: 4096 bytes 00:23:39.561 Memory Page Size Maximum: 4096 bytes 00:23:39.561 Persistent Memory Region: Not Supported 00:23:39.561 Optional Asynchronous Events Supported 00:23:39.561 Namespace Attribute Notices: Supported 00:23:39.561 Firmware Activation Notices: Not Supported 00:23:39.561 ANA Change Notices: Supported 00:23:39.561 PLE Aggregate Log Change Notices: Not Supported 00:23:39.561 LBA Status Info Alert Notices: Not Supported 00:23:39.561 EGE Aggregate Log Change Notices: Not Supported 00:23:39.561 Normal NVM Subsystem Shutdown event: Not Supported 00:23:39.561 Zone Descriptor Change Notices: Not Supported 00:23:39.561 Discovery Log Change Notices: Not Supported 00:23:39.561 Controller Attributes 00:23:39.561 128-bit Host Identifier: Supported 00:23:39.561 Non-Operational Permissive Mode: Not Supported 00:23:39.561 NVM Sets: Not Supported 00:23:39.561 Read Recovery Levels: Not Supported 00:23:39.561 Endurance Groups: Not Supported 00:23:39.561 Predictable Latency Mode: Not Supported 00:23:39.561 Traffic Based Keep ALive: Supported 00:23:39.561 Namespace Granularity: Not Supported 00:23:39.561 SQ Associations: Not Supported 00:23:39.561 UUID List: Not Supported 00:23:39.561 Multi-Domain Subsystem: Not Supported 00:23:39.561 Fixed Capacity Management: Not Supported 00:23:39.561 Variable Capacity Management: Not Supported 00:23:39.561 Delete Endurance Group: Not Supported 00:23:39.561 Delete NVM Set: Not Supported 00:23:39.561 Extended LBA Formats Supported: Not Supported 00:23:39.561 Flexible Data Placement Supported: Not Supported 00:23:39.561 00:23:39.561 Controller Memory Buffer Support 
00:23:39.561 ================================ 00:23:39.561 Supported: No 00:23:39.561 00:23:39.561 Persistent Memory Region Support 00:23:39.561 ================================ 00:23:39.561 Supported: No 00:23:39.561 00:23:39.562 Admin Command Set Attributes 00:23:39.562 ============================ 00:23:39.562 Security Send/Receive: Not Supported 00:23:39.562 Format NVM: Not Supported 00:23:39.562 Firmware Activate/Download: Not Supported 00:23:39.562 Namespace Management: Not Supported 00:23:39.562 Device Self-Test: Not Supported 00:23:39.562 Directives: Not Supported 00:23:39.562 NVMe-MI: Not Supported 00:23:39.562 Virtualization Management: Not Supported 00:23:39.562 Doorbell Buffer Config: Not Supported 00:23:39.562 Get LBA Status Capability: Not Supported 00:23:39.562 Command & Feature Lockdown Capability: Not Supported 00:23:39.562 Abort Command Limit: 4 00:23:39.562 Async Event Request Limit: 4 00:23:39.562 Number of Firmware Slots: N/A 00:23:39.562 Firmware Slot 1 Read-Only: N/A 00:23:39.562 Firmware Activation Without Reset: N/A 00:23:39.562 Multiple Update Detection Support: N/A 00:23:39.562 Firmware Update Granularity: No Information Provided 00:23:39.562 Per-Namespace SMART Log: Yes 00:23:39.562 Asymmetric Namespace Access Log Page: Supported 00:23:39.562 ANA Transition Time : 10 sec 00:23:39.562 00:23:39.562 Asymmetric Namespace Access Capabilities 00:23:39.562 ANA Optimized State : Supported 00:23:39.562 ANA Non-Optimized State : Supported 00:23:39.562 ANA Inaccessible State : Supported 00:23:39.562 ANA Persistent Loss State : Supported 00:23:39.562 ANA Change State : Supported 00:23:39.562 ANAGRPID is not changed : No 00:23:39.562 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:39.562 00:23:39.562 ANA Group Identifier Maximum : 128 00:23:39.562 Number of ANA Group Identifiers : 128 00:23:39.562 Max Number of Allowed Namespaces : 1024 00:23:39.562 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:39.562 Command Effects Log Page: Supported 
00:23:39.562 Get Log Page Extended Data: Supported 00:23:39.562 Telemetry Log Pages: Not Supported 00:23:39.562 Persistent Event Log Pages: Not Supported 00:23:39.562 Supported Log Pages Log Page: May Support 00:23:39.562 Commands Supported & Effects Log Page: Not Supported 00:23:39.562 Feature Identifiers & Effects Log Page:May Support 00:23:39.562 NVMe-MI Commands & Effects Log Page: May Support 00:23:39.562 Data Area 4 for Telemetry Log: Not Supported 00:23:39.562 Error Log Page Entries Supported: 128 00:23:39.562 Keep Alive: Supported 00:23:39.562 Keep Alive Granularity: 1000 ms 00:23:39.562 00:23:39.562 NVM Command Set Attributes 00:23:39.562 ========================== 00:23:39.562 Submission Queue Entry Size 00:23:39.562 Max: 64 00:23:39.562 Min: 64 00:23:39.562 Completion Queue Entry Size 00:23:39.562 Max: 16 00:23:39.562 Min: 16 00:23:39.562 Number of Namespaces: 1024 00:23:39.562 Compare Command: Not Supported 00:23:39.562 Write Uncorrectable Command: Not Supported 00:23:39.562 Dataset Management Command: Supported 00:23:39.562 Write Zeroes Command: Supported 00:23:39.562 Set Features Save Field: Not Supported 00:23:39.562 Reservations: Not Supported 00:23:39.562 Timestamp: Not Supported 00:23:39.562 Copy: Not Supported 00:23:39.562 Volatile Write Cache: Present 00:23:39.562 Atomic Write Unit (Normal): 1 00:23:39.562 Atomic Write Unit (PFail): 1 00:23:39.562 Atomic Compare & Write Unit: 1 00:23:39.562 Fused Compare & Write: Not Supported 00:23:39.562 Scatter-Gather List 00:23:39.562 SGL Command Set: Supported 00:23:39.562 SGL Keyed: Not Supported 00:23:39.562 SGL Bit Bucket Descriptor: Not Supported 00:23:39.562 SGL Metadata Pointer: Not Supported 00:23:39.562 Oversized SGL: Not Supported 00:23:39.562 SGL Metadata Address: Not Supported 00:23:39.562 SGL Offset: Supported 00:23:39.562 Transport SGL Data Block: Not Supported 00:23:39.562 Replay Protected Memory Block: Not Supported 00:23:39.562 00:23:39.562 Firmware Slot Information 00:23:39.562 
========================= 00:23:39.562 Active slot: 0 00:23:39.562 00:23:39.562 Asymmetric Namespace Access 00:23:39.562 =========================== 00:23:39.562 Change Count : 0 00:23:39.562 Number of ANA Group Descriptors : 1 00:23:39.562 ANA Group Descriptor : 0 00:23:39.562 ANA Group ID : 1 00:23:39.562 Number of NSID Values : 1 00:23:39.562 Change Count : 0 00:23:39.562 ANA State : 1 00:23:39.562 Namespace Identifier : 1 00:23:39.562 00:23:39.562 Commands Supported and Effects 00:23:39.562 ============================== 00:23:39.562 Admin Commands 00:23:39.562 -------------- 00:23:39.562 Get Log Page (02h): Supported 00:23:39.562 Identify (06h): Supported 00:23:39.562 Abort (08h): Supported 00:23:39.562 Set Features (09h): Supported 00:23:39.562 Get Features (0Ah): Supported 00:23:39.562 Asynchronous Event Request (0Ch): Supported 00:23:39.562 Keep Alive (18h): Supported 00:23:39.562 I/O Commands 00:23:39.562 ------------ 00:23:39.562 Flush (00h): Supported 00:23:39.562 Write (01h): Supported LBA-Change 00:23:39.562 Read (02h): Supported 00:23:39.562 Write Zeroes (08h): Supported LBA-Change 00:23:39.562 Dataset Management (09h): Supported 00:23:39.562 00:23:39.562 Error Log 00:23:39.562 ========= 00:23:39.562 Entry: 0 00:23:39.562 Error Count: 0x3 00:23:39.562 Submission Queue Id: 0x0 00:23:39.562 Command Id: 0x5 00:23:39.562 Phase Bit: 0 00:23:39.562 Status Code: 0x2 00:23:39.562 Status Code Type: 0x0 00:23:39.562 Do Not Retry: 1 00:23:39.562 Error Location: 0x28 00:23:39.562 LBA: 0x0 00:23:39.562 Namespace: 0x0 00:23:39.562 Vendor Log Page: 0x0 00:23:39.562 ----------- 00:23:39.562 Entry: 1 00:23:39.562 Error Count: 0x2 00:23:39.562 Submission Queue Id: 0x0 00:23:39.562 Command Id: 0x5 00:23:39.562 Phase Bit: 0 00:23:39.562 Status Code: 0x2 00:23:39.562 Status Code Type: 0x0 00:23:39.562 Do Not Retry: 1 00:23:39.562 Error Location: 0x28 00:23:39.562 LBA: 0x0 00:23:39.562 Namespace: 0x0 00:23:39.562 Vendor Log Page: 0x0 00:23:39.562 ----------- 00:23:39.562 
Entry: 2 00:23:39.562 Error Count: 0x1 00:23:39.562 Submission Queue Id: 0x0 00:23:39.562 Command Id: 0x4 00:23:39.562 Phase Bit: 0 00:23:39.562 Status Code: 0x2 00:23:39.562 Status Code Type: 0x0 00:23:39.562 Do Not Retry: 1 00:23:39.562 Error Location: 0x28 00:23:39.562 LBA: 0x0 00:23:39.562 Namespace: 0x0 00:23:39.562 Vendor Log Page: 0x0 00:23:39.562 00:23:39.562 Number of Queues 00:23:39.562 ================ 00:23:39.562 Number of I/O Submission Queues: 128 00:23:39.562 Number of I/O Completion Queues: 128 00:23:39.562 00:23:39.562 ZNS Specific Controller Data 00:23:39.562 ============================ 00:23:39.562 Zone Append Size Limit: 0 00:23:39.562 00:23:39.562 00:23:39.562 Active Namespaces 00:23:39.562 ================= 00:23:39.562 get_feature(0x05) failed 00:23:39.562 Namespace ID:1 00:23:39.562 Command Set Identifier: NVM (00h) 00:23:39.562 Deallocate: Supported 00:23:39.562 Deallocated/Unwritten Error: Not Supported 00:23:39.562 Deallocated Read Value: Unknown 00:23:39.562 Deallocate in Write Zeroes: Not Supported 00:23:39.562 Deallocated Guard Field: 0xFFFF 00:23:39.562 Flush: Supported 00:23:39.562 Reservation: Not Supported 00:23:39.562 Namespace Sharing Capabilities: Multiple Controllers 00:23:39.562 Size (in LBAs): 1953525168 (931GiB) 00:23:39.562 Capacity (in LBAs): 1953525168 (931GiB) 00:23:39.562 Utilization (in LBAs): 1953525168 (931GiB) 00:23:39.562 UUID: 5ed51b31-ad91-48e6-85c7-826ad9c4ce7c 00:23:39.562 Thin Provisioning: Not Supported 00:23:39.562 Per-NS Atomic Units: Yes 00:23:39.562 Atomic Boundary Size (Normal): 0 00:23:39.562 Atomic Boundary Size (PFail): 0 00:23:39.562 Atomic Boundary Offset: 0 00:23:39.562 NGUID/EUI64 Never Reused: No 00:23:39.562 ANA group ID: 1 00:23:39.562 Namespace Write Protected: No 00:23:39.562 Number of LBA Formats: 1 00:23:39.562 Current LBA Format: LBA Format #00 00:23:39.562 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:39.562 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.562 rmmod nvme_tcp 00:23:39.562 rmmod nvme_fabrics 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.562 21:45:30 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:42.102 21:45:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:42.670 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:42.670 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:42.670 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:42.670 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:42.670 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:42.670 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:42.670 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:42.670 0000:00:04.0 (8086 3c20): ioatdma -> 
vfio-pci 00:23:42.670 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:42.670 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:42.670 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:42.670 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:42.670 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:42.670 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:42.670 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:42.670 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:43.607 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:23:43.607 00:23:43.607 real 0m8.681s 00:23:43.607 user 0m1.721s 00:23:43.607 sys 0m3.017s 00:23:43.607 21:45:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.607 21:45:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.607 ************************************ 00:23:43.607 END TEST nvmf_identify_kernel_target 00:23:43.607 ************************************ 00:23:43.607 21:45:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:43.607 21:45:34 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:43.607 21:45:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:43.607 21:45:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.607 21:45:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:43.607 ************************************ 00:23:43.607 START TEST nvmf_auth_host 00:23:43.607 ************************************ 00:23:43.607 21:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:43.865 * Looking for test storage... 
00:23:43.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.865 
21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.865 
21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.865 21:45:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:45.769 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.769 21:45:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.769 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:45.770 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:08:00.0: cvl_0_0' 00:23:45.770 Found net devices under 0000:08:00.0: cvl_0_0 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:45.770 Found net devices under 0000:08:00.1: cvl_0_1 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:45.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:23:45.770 00:23:45.770 --- 10.0.0.2 ping statistics --- 00:23:45.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.770 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:23:45.770 00:23:45.770 --- 10.0.0.1 ping statistics --- 00:23:45.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.770 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.770 21:45:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=422813 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 422813 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 422813 ']' 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.770 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:46.029 21:45:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b0973c66f8c24c3123b4fd104a2d0932 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.k7X 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b0973c66f8c24c3123b4fd104a2d0932 0 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b0973c66f8c24c3123b4fd104a2d0932 0 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b0973c66f8c24c3123b4fd104a2d0932 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.k7X 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.k7X 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.k7X 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=16de30eaef4c8cceae4513a2f1ea6e8f2205cefa94bdb979fa0104ef0c3e3641 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GCf 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 16de30eaef4c8cceae4513a2f1ea6e8f2205cefa94bdb979fa0104ef0c3e3641 3 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 16de30eaef4c8cceae4513a2f1ea6e8f2205cefa94bdb979fa0104ef0c3e3641 3 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=16de30eaef4c8cceae4513a2f1ea6e8f2205cefa94bdb979fa0104ef0c3e3641 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GCf 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GCf 00:23:46.029 21:45:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.GCf 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8125989b042eb9216806279de35c0747cf99ba3d169c9758 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.TXA 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8125989b042eb9216806279de35c0747cf99ba3d169c9758 0 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8125989b042eb9216806279de35c0747cf99ba3d169c9758 0 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8125989b042eb9216806279de35c0747cf99ba3d169c9758 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:46.029 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.TXA 00:23:46.287 21:45:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.TXA 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.TXA 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f50e3c5fe33fbb2f7e9f7d5813de2dddae990db9dc2e081d 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iNb 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f50e3c5fe33fbb2f7e9f7d5813de2dddae990db9dc2e081d 2 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f50e3c5fe33fbb2f7e9f7d5813de2dddae990db9dc2e081d 2 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f50e3c5fe33fbb2f7e9f7d5813de2dddae990db9dc2e081d 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.287 21:45:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iNb 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iNb 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.iNb 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e43f1daf8e9dd7647e2b1714ac04b989 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LuF 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e43f1daf8e9dd7647e2b1714ac04b989 1 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e43f1daf8e9dd7647e2b1714ac04b989 1 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e43f1daf8e9dd7647e2b1714ac04b989 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LuF 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LuF 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.LuF 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d0eeb3d7ec7a6b8ecd6ae3a69543363 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SpP 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d0eeb3d7ec7a6b8ecd6ae3a69543363 1 00:23:46.287 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d0eeb3d7ec7a6b8ecd6ae3a69543363 1 00:23:46.288 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.288 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.288 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4d0eeb3d7ec7a6b8ecd6ae3a69543363 00:23:46.288 21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:46.288 
21:45:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SpP 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SpP 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.SpP 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3ca8ad3ed7bed0affaff8b5a272fdc075721fc47a3cfad6f 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wJQ 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3ca8ad3ed7bed0affaff8b5a272fdc075721fc47a3cfad6f 2 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3ca8ad3ed7bed0affaff8b5a272fdc075721fc47a3cfad6f 2 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=3ca8ad3ed7bed0affaff8b5a272fdc075721fc47a3cfad6f 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:46.288 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wJQ 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wJQ 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wJQ 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=33c8c514b2ce83e59d118ac214ddddae 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.55w 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 33c8c514b2ce83e59d118ac214ddddae 0 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 33c8c514b2ce83e59d118ac214ddddae 0 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.546 21:45:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=33c8c514b2ce83e59d118ac214ddddae 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.55w 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.55w 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.55w 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=56a878cb866eb26b0df393e55541cc849f1a00afd9f632cd3486fa63aa2e6077 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eQ0 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 56a878cb866eb26b0df393e55541cc849f1a00afd9f632cd3486fa63aa2e6077 3 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 56a878cb866eb26b0df393e55541cc849f1a00afd9f632cd3486fa63aa2e6077 3 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=56a878cb866eb26b0df393e55541cc849f1a00afd9f632cd3486fa63aa2e6077 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eQ0 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eQ0 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eQ0 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 422813 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 422813 ']' 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
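
The ten `gen_dhchap_key` calls traced above all follow one pattern: read `len/2` random bytes with `xxd`, wrap them in a `DHHC-1` secret string, write it to a `mktemp` file, and `chmod 0600` it. A minimal bash sketch of that pattern follows; the digest indices (null=0, sha256=1, sha384=2, sha512=3) are taken from the log, but the exact `DHHC-1` encoding SPDK's `format_dhchap_key` emits (it base64-encodes the key plus a CRC via the inline `python -` step) is only approximated here, without the CRC:

```shell
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key pattern repeated in the log above.
# Digest indices follow the log: null=0, sha256=1, sha384=2, sha512=3.
# NOTE: SPDK's real helper base64-encodes key+CRC; this approximation
# skips the CRC and is for illustration only.
set -euo pipefail

gen_dhchap_key() {
    local digest=$1 len=$2      # digest index, key length in hex chars
    local key file
    key=$(xxd -p -c0 -l "$((len / 2))" /dev/urandom)   # len/2 random bytes as hex
    file=$(mktemp -t spdk.key-XXXXXX)
    printf 'DHHC-1:%02d:%s:\n' "$digest" "$(printf '%s' "$key" | base64 -w0)" > "$file"
    chmod 0600 "$file"          # secrets must not be world-readable
    echo "$file"
}

keyfile=$(gen_dhchap_key 0 32)  # null digest, 32 hex chars, as in host/auth.sh@73
grep '^DHHC-1:00:' "$keyfile"
```

Each generated file is then handed to the target via `rpc_cmd keyring_file_add_key keyN <file>`, which is what the `host/auth.sh@80`-`@82` loop below does.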
00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.546 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k7X 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.GCf ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GCf 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.TXA 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.iNb ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iNb 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.LuF 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.SpP ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SpP 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wJQ 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.803 
21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.55w ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.55w 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.803 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eQ0 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:46.804 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:47.060 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:47.060 21:45:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:47.624 Waiting for block devices as requested 00:23:47.881 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:23:47.881 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:47.881 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:48.138 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:48.138 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:48.138 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:48.138 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:48.395 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:48.395 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:48.395 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:48.395 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:48.652 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:48.652 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:48.652 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:48.652 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:48.652 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:48.909 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:49.167 No valid GPT data, bailing 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:49.167 21:45:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:23:49.425 00:23:49.425 Discovery Log Number of Records 2, Generation counter 2 00:23:49.425 =====Discovery Log Entry 0====== 00:23:49.425 trtype: tcp 00:23:49.425 adrfam: ipv4 00:23:49.425 subtype: current discovery subsystem 00:23:49.425 treq: not specified, sq flow control disable supported 00:23:49.425 portid: 1 00:23:49.425 trsvcid: 4420 00:23:49.425 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:49.425 traddr: 10.0.0.1 00:23:49.425 eflags: none 00:23:49.425 sectype: none 00:23:49.425 =====Discovery Log Entry 1====== 00:23:49.425 trtype: tcp 00:23:49.425 adrfam: ipv4 00:23:49.425 subtype: nvme subsystem 00:23:49.425 treq: not specified, sq flow control disable supported 00:23:49.425 portid: 1 00:23:49.425 trsvcid: 4420 00:23:49.425 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:49.425 traddr: 10.0.0.1 00:23:49.425 eflags: none 00:23:49.425 sectype: none 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.425 21:45:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.425 21:45:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.425 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.682 nvme0n1 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:49.682 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.683 nvme0n1 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.683 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.940 21:45:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.940 nvme0n1 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.940 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.941 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.941 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.941 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.941 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.197 21:45:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.197 nvme0n1 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.197 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.198 21:45:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.454 nvme0n1 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.454 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.711 nvme0n1 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:50.711 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.968 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.224 nvme0n1 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:51.224 21:45:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.224 21:45:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.480 nvme0n1 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.480 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.481 21:45:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.481 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.737 nvme0n1 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.737 21:45:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.737 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.993 nvme0n1 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.993 21:45:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.994 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.250 nvme0n1 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.250 21:45:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.812 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.068 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.068 21:45:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.068 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.068 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.324 nvme0n1 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.324 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.325 
21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.325 21:45:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.582 nvme0n1 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.582 21:45:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.582 21:45:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.582 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.840 nvme0n1 00:23:53.840 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.840 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.840 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.840 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.840 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.840 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.097 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.354 nvme0n1 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.354 21:45:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z '' ]] 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.354 21:45:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 
00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.354 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.611 nvme0n1 00:23:54.611 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.611 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.611 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.611 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.611 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.612 21:45:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.138 
21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.138 nvme0n1 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.138 21:45:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.138 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.396 21:45:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.962 nvme0n1 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.962 21:45:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.527 nvme0n1 
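Each iteration above verifies the attach by listing controllers and extracting their names with jq (host/auth.sh@64), then comparing against `nvme0`. A minimal standalone sketch of that check, using an assumed sample of the `bdev_nvme_get_controllers` RPC output and requiring jq to be installed:

```shell
# Hypothetical sample of `rpc_cmd bdev_nvme_get_controllers` output;
# the real RPC returns more fields, but only .name matters for this check.
json='[{"name":"nvme0"}]'

# Same extraction the test performs (host/auth.sh@64).
name=$(echo "$json" | jq -r '.[].name')

# The glob-escaped comparison in the trace, [[ nvme0 == \n\v\m\e\0 ]],
# is just a literal string match; a plain test expresses the same thing.
[ "$name" = "nvme0" ] && echo "controller attached: $name"
```

The backslash-escaped right-hand side in the trace is how xtrace prints a quoted pattern so that `==` inside `[[ ]]` matches literally rather than as a glob.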
00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.527 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.095 nvme0n1 00:23:59.095 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.095 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.096 21:45:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.096 21:45:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.662 nvme0n1 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 
00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:59.662 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.663 21:45:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.663 21:45:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.630 nvme0n1 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.630 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.631 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.631 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.631 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.631 21:45:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.631 21:45:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.631 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.631 21:45:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.562 nvme0n1 00:24:01.562 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.562 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.562 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.562 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.562 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.562 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.562 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.562 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.563 21:45:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.496 nvme0n1 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
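The `get_main_ns_ip` stanza repeated throughout this log (nvmf/common.sh@741-755) resolves the target address in two steps: map the transport to the *name* of an environment variable, then dereference that name with bash indirect expansion. A sketch of that logic, with assumed test values standing in for the environment the run sets up:

```shell
# Transport -> env-var *name*, as in nvmf/common.sh@742-745.
declare -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
)

# Assumed values; the real run exports these during setup.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1

varname=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
ip=${!varname}                              # indirect expansion -> 10.0.0.1
echo "$ip"
```

This is why the trace shows both `ip=NVMF_INITIATOR_IP` (the variable name) and then `echo 10.0.0.1` (its dereferenced value) on consecutive lines.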
00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:02.496 21:45:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.496 21:45:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.496 21:45:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.426 nvme0n1 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.426 21:45:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.426 21:45:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.358 nvme0n1 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.358 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.616 
21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.616 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.617 nvme0n1 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.617 
21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.617 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.875 nvme0n1 00:24:04.875 21:45:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.875 21:45:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.875 21:45:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.875 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.134 nvme0n1 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.134 
21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:05.134 21:45:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.134 21:45:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.392 nvme0n1 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:05.392 21:45:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.392 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.650 nvme0n1 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.650 21:45:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.650 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.908 nvme0n1 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:05.908 21:45:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:24:05.908 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.909 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.166 nvme0n1 00:24:06.166 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.166 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.166 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.166 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.166 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.167 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.425 nvme0n1 00:24:06.425 21:45:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.425 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.425 21:45:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.425 21:45:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.425 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.683 nvme0n1 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.683 21:45:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.683 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.941 nvme0n1 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:06.941 
21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.941 
21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:06.941 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.200 nvme0n1
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==:
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==:
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==:
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==:
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.200 21:45:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.458 nvme0n1
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA:
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I:
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA:
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]]
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I:
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:24:07.458 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.717 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.975 nvme0n1
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==:
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP:
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==:
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP:
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:07.975 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.233 nvme0n1
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=:
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=:
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.233 21:45:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.490 nvme0n1
00:24:08.491 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.491 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:08.491 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.491 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.491 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:08.491 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5:
00:24:08.748 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=:
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5:
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]]
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=:
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:08.749 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.315 nvme0n1
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==:
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==:
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==:
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==:
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.315 21:45:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.878 nvme0n1
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA:
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I:
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA:
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]]
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I:
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:09.878 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:09.879 21:46:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.444 nvme0n1
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==:
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP:
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==:
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP:
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.444 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.009 nvme0n1 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.009 21:46:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.573 nvme0n1 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.573 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:11.574 21:46:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.574 21:46:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.505 nvme0n1 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.505 21:46:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.434 nvme0n1 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.434 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.691 21:46:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.621 nvme0n1 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.621 21:46:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.621 21:46:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.621 21:46:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.552 nvme0n1 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.553 
21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:15.553 21:46:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.553 21:46:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.484 nvme0n1 00:24:16.484 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.484 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.484 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.484 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.484 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:16.485 
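The trace above repeats one host-side RPC sequence per digest/dhgroup/key combination: restrict the negotiable DH-HMAC-CHAP parameters with `bdev_nvme_set_options`, attach with the key under test, confirm the controller exists, then detach. A minimal dry-run sketch of that sequence, with values taken from the iteration above (the `rpc.py` location and the `run` helper are assumptions, not part of the test suite):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC calls one test iteration performs.
# Set DRY_RUN=0 against a live SPDK target; rpc.py on PATH is an assumption.
set -euo pipefail
DRY_RUN=${DRY_RUN:-1}

# Hypothetical helper: echo the command in dry-run mode, execute otherwise.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

digest=sha384
dhgroup=ffdhe8192
keyid=4

# Limit negotiation to the digest/dhgroup pair under test (auth.sh@60).
run rpc.py bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect, authenticating with the host key for this iteration (auth.sh@61).
run rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}"

# Verify the controller came up, then tear down for the next pass (auth.sh@64-65).
run rpc.py bdev_nvme_get_controllers
run rpc.py bdev_nvme_detach_controller nvme0
```

With `DRY_RUN=0` this mirrors exactly the commands visible in the xtrace output.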
21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.485 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.743 nvme0n1 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.743 21:46:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.743 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.001 nvme0n1 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.001 
21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.001 21:46:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:17.259 nvme0n1 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:17.259 
21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.259 21:46:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.259 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.260 21:46:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.517 nvme0n1 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.517 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.518 nvme0n1 00:24:17.518 21:46:08 
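Note how the key4 attach above carries no `--dhchap-ctrlr-key` flag while the key0 through key3 attaches do. That comes from the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion at auth.sh@58: when no controller key is configured for a slot, the `${var:+word}` form expands to nothing and the whole argument pair is dropped. A small standalone demonstration (the dummy key values are placeholders, not real DHHC-1 secrets):

```shell
# Demonstrate the ${var:+word} idiom auth.sh uses to optionally append
# the --dhchap-ctrlr-key argument pair. Slot 4 intentionally has no
# controller key, mirroring the trace.
declare -A ckeys=([0]=dummy0 [1]=dummy1 [2]=dummy2 [3]=dummy3)

keyid=4
args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "key${keyid} adds ${#args[@]} extra args"   # key4 adds 0 extra args

keyid=0
args=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "key${keyid} adds ${#args[@]} extra args"   # key0 adds 2 extra args
```

Because the expansion is empty rather than a pair of empty strings, the RPC client never sees a dangling flag for slots without a controller key.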
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.518 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.775 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:17.776 21:46:08 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.776 nvme0n1 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.776 
21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.776 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.034 nvme0n1 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.034 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.293 21:46:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.293 21:46:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.293 nvme0n1 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.293 21:46:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.293 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:18.551 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.552 nvme0n1 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.552 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.817 21:46:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.817 nvme0n1 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:18.817 21:46:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.817 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.104 nvme0n1 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.104 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.363 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.363 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.363 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.363 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.363 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.363 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.363 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:19.364 21:46:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.364 21:46:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.622 nvme0n1 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.622 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.881 nvme0n1 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.881 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.139 nvme0n1 00:24:20.139 21:46:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.139 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.139 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.139 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.139 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe4096 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.396 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.397 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.397 21:46:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.397 21:46:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.397 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.397 21:46:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.654 nvme0n1 00:24:20.654 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.654 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.654 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.654 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.654 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.654 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.655 
21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 
00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.655 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.655 21:46:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.219 nvme0n1 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.219 21:46:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.220 21:46:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.220 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.220 21:46:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.783 nvme0n1 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.783 21:46:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid 
ckey 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.783 21:46:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.346 nvme0n1 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.347 21:46:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.347 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.911 nvme0n1 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.911 21:46:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.911 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:22.912 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:22.912 21:46:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:22.912 21:46:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.912 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.912 21:46:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.475 nvme0n1 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjA5NzNjNjZmOGMyNGMzMTIzYjRmZDEwNGEyZDA5MzJo5ae5: 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: ]] 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTZkZTMwZWFlZjRjOGNjZWFlNDUxM2EyZjFlYTZlOGYyMjA1Y2VmYTk0YmRiOTc5ZmEwMTA0ZWYwYzNlMzY0MU2pxf8=: 00:24:23.475 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.476 21:46:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.407 nvme0n1 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.407 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.665 21:46:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.596 nvme0n1 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQzZjFkYWY4ZTlkZDc2NDdlMmIxNzE0YWMwNGI5ODlaj3VA: 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: ]] 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQwZWViM2Q3ZWM3YTZiOGVjZDZhZTNhNjk1NDMzNjPIo58I: 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.596 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.597 21:46:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.526 nvme0n1 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2NhOGFkM2VkN2JlZDBhZmZhZmY4YjVhMjcyZmRjMDc1NzIxZmM0N2EzY2ZhZDZmLCUTew==: 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzNjOGM1MTRiMmNlODNlNTlkMTE4YWMyMTRkZGRkYWWR9fCP: 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.526 21:46:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.526 21:46:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.457 nvme0n1 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.457 21:46:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTZhODc4Y2I4NjZlYjI2YjBkZjM5M2U1NTU0MWNjODQ5ZjFhMDBhZmQ5ZjYzMmNkMzQ4NmZhNjNhYTJlNjA3Nz+DXe4=: 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:27.457 21:46:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.457 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.458 21:46:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.389 nvme0n1 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODEyNTk4OWIwNDJlYjkyMTY4MDYyNzlkZTM1YzA3NDdjZjk5YmEzZDE2OWM5NzU47+1rPA==: 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjUwZTNjNWZlMzNmYmIyZjdlOWY3ZDU4MTNkZTJkZGRhZTk5MGRiOWRjMmUwODFkHtOl3Q==: 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.389 request: 00:24:28.389 { 00:24:28.389 "name": "nvme0", 00:24:28.389 "trtype": "tcp", 00:24:28.389 "traddr": "10.0.0.1", 00:24:28.389 "adrfam": "ipv4", 00:24:28.389 "trsvcid": "4420", 00:24:28.389 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:28.389 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:28.389 "prchk_reftag": false, 00:24:28.389 "prchk_guard": false, 00:24:28.389 "hdgst": false, 00:24:28.389 "ddgst": false, 00:24:28.389 "method": "bdev_nvme_attach_controller", 00:24:28.389 "req_id": 1 00:24:28.389 } 00:24:28.389 Got JSON-RPC error response 00:24:28.389 response: 00:24:28.389 { 00:24:28.389 "code": -5, 00:24:28.389 "message": "Input/output error" 00:24:28.389 } 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
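The request/response pair above is the expected-failure branch of the test: attaching to an auth-required subsystem with no DHCHAP key must be rejected, and SPDK reports the failure as a negative errno in the JSON-RPC error object. A minimal sketch of decoding that error code (the response JSON is copied from the log; the errno mapping is standard POSIX, with the message text as rendered by glibc):

```python
import errno
import json
import os

# JSON-RPC error response copied from the log above: attaching without a
# DHCHAP key to a subsystem that requires authentication fails with -5.
response = json.loads('{"code": -5, "message": "Input/output error"}')

# SPDK returns negative errno values in "code"; -5 is -EIO, which
# strerror() renders as the "Input/output error" message seen in the log.
code = -response["code"]
assert code == errno.EIO
print(os.strerror(code))
```

The test harness treats this as success: the `NOT rpc_cmd …` wrapper inverts the exit status, so the connection attempt failing with `-EIO` is exactly what the `[[ 1 == 0 ]]` / `es=1` lines afterwards are checking.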
00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:28.389 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:28.390 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:28.390 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:28.390 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:28.390 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.390 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.647 request: 00:24:28.647 { 00:24:28.647 "name": "nvme0", 00:24:28.647 "trtype": "tcp", 00:24:28.647 "traddr": "10.0.0.1", 00:24:28.647 "adrfam": "ipv4", 00:24:28.647 "trsvcid": "4420", 00:24:28.647 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:28.647 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:28.647 "prchk_reftag": false, 00:24:28.647 "prchk_guard": false, 00:24:28.647 "hdgst": false, 00:24:28.647 "ddgst": false, 00:24:28.647 "dhchap_key": "key2", 00:24:28.647 "method": "bdev_nvme_attach_controller", 00:24:28.647 "req_id": 1 00:24:28.647 } 00:24:28.647 Got JSON-RPC error response 00:24:28.647 response: 00:24:28.647 { 
00:24:28.647 "code": -5, 00:24:28.647 "message": "Input/output error" 00:24:28.647 } 00:24:28.647 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:28.647 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:28.648 
21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.648 request: 00:24:28.648 { 00:24:28.648 "name": "nvme0", 00:24:28.648 "trtype": "tcp", 00:24:28.648 "traddr": "10.0.0.1", 00:24:28.648 "adrfam": "ipv4", 00:24:28.648 "trsvcid": "4420", 00:24:28.648 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:28.648 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:28.648 
"prchk_reftag": false, 00:24:28.648 "prchk_guard": false, 00:24:28.648 "hdgst": false, 00:24:28.648 "ddgst": false, 00:24:28.648 "dhchap_key": "key1", 00:24:28.648 "dhchap_ctrlr_key": "ckey2", 00:24:28.648 "method": "bdev_nvme_attach_controller", 00:24:28.648 "req_id": 1 00:24:28.648 } 00:24:28.648 Got JSON-RPC error response 00:24:28.648 response: 00:24:28.648 { 00:24:28.648 "code": -5, 00:24:28.648 "message": "Input/output error" 00:24:28.648 } 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.648 rmmod nvme_tcp 00:24:28.648 rmmod nvme_fabrics 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 422813 ']' 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 422813 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 422813 ']' 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 422813 00:24:28.648 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 422813 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 422813' 00:24:28.906 killing process with pid 422813 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 422813 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 422813 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:24:28.906 21:46:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:31.446 21:46:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:32.015 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:24:32.015 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:24:32.015 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:24:32.015 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 
00:24:32.015 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:24:32.015 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:24:32.015 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:24:32.015 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:24:32.015 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:24:32.274 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:24:32.274 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:24:32.274 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:24:32.274 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:24:32.274 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:24:32.274 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:24:32.274 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:24:33.210 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:24:33.210 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.k7X /tmp/spdk.key-null.TXA /tmp/spdk.key-sha256.LuF /tmp/spdk.key-sha384.wJQ /tmp/spdk.key-sha512.eQ0 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:33.210 21:46:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:34.147 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:24:34.147 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:34.147 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:24:34.147 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:24:34.147 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:24:34.147 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:24:34.147 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:24:34.147 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:24:34.147 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:24:34.147 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:24:34.147 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:24:34.147 
0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:24:34.147 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:24:34.147 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:24:34.147 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:24:34.147 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:24:34.147 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:24:34.147 00:24:34.147 real 0m50.478s 00:24:34.147 user 0m46.889s 00:24:34.147 sys 0m5.246s 00:24:34.147 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:34.147 21:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.147 ************************************ 00:24:34.147 END TEST nvmf_auth_host 00:24:34.147 ************************************ 00:24:34.147 21:46:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:34.147 21:46:24 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:24:34.147 21:46:24 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:34.147 21:46:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:34.147 21:46:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.147 21:46:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:34.147 ************************************ 00:24:34.147 START TEST nvmf_digest 00:24:34.147 ************************************ 00:24:34.147 21:46:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:34.407 * Looking for test storage... 
00:24:34.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.407 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.408 21:46:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:36.306 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:36.306 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.306 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:36.307 Found net devices under 0000:08:00.0: cvl_0_0 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:36.307 Found net devices under 0000:08:00.1: cvl_0_1 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:36.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:24:36.307 00:24:36.307 --- 10.0.0.2 ping statistics --- 00:24:36.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.307 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:24:36.307 00:24:36.307 --- 10.0.0.1 ping statistics --- 00:24:36.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.307 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:36.307 ************************************ 00:24:36.307 START TEST nvmf_digest_clean 00:24:36.307 ************************************ 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:36.307 21:46:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=430460 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 430460 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 430460 ']' 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.307 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
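The `nvmf_tcp_init` sequence earlier in the trace (common.sh@229-268) can be condensed into one sketch: the target-side port is moved into a network namespace so the initiator (10.0.0.1) and target (10.0.0.2) talk over real NICs on a single host. Interface, namespace, and address names are taken from the log; the function wrapper itself is illustrative and requires root to actually run.

```shell
#!/usr/bin/env bash
# Sketch of the topology built by nvmf_tcp_init as seen in the trace.
# Requires root and the two test ports (cvl_0_0 / cvl_0_1); not run here.
setup_nvmf_tcp_testbed() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"             # target port lives in the netns
    ip addr add 10.0.0.1/24 dev "$ini_if"         # initiator IP, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port (4420) toward the initiator-side interface
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1        # target -> initiator
}
```

The two pings at the end mirror the connectivity check the harness performs before declaring the testbed ready.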
00:24:36.308 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.308 21:46:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.308 [2024-07-15 21:46:26.797241] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:24:36.308 [2024-07-15 21:46:26.797341] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.308 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.308 [2024-07-15 21:46:26.862900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.308 [2024-07-15 21:46:26.981283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.308 [2024-07-15 21:46:26.981346] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.308 [2024-07-15 21:46:26.981363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.308 [2024-07-15 21:46:26.981377] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.308 [2024-07-15 21:46:26.981389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
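The `nvmfappstart` step traced above launches `nvmf_tgt` inside the test namespace, paused at `--wait-for-rpc`, then blocks in `waitforlisten` until the RPC socket answers. The sketch below follows the paths and messages in the trace; the polling command (`rpc_get_methods`, a real SPDK RPC) and the retry interval are assumptions, not the verbatim autotest_common.sh helper.

```shell
#!/usr/bin/env bash
# Sketch of nvmfappstart/waitforlisten as seen in the trace. Illustrative only:
# the real helper lives in autotest_common.sh and may differ in detail.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

nvmfappstart() {
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
}

waitforlisten() {
    local pid=$1 rpc_addr=/var/tmp/spdk.sock max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
        # assumed liveness probe: any RPC succeeding means the socket is up
        "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}
```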
00:24:36.308 [2024-07-15 21:46:26.981420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.308 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.566 null0 00:24:36.566 [2024-07-15 21:46:27.167700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.566 [2024-07-15 21:46:27.191875] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:36.566 
21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=430577 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 430577 /var/tmp/bperf.sock 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 430577 ']' 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:36.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:36.566 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.566 [2024-07-15 21:46:27.249087] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
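Each bperf pass in this test follows the same four-step flow visible in the trace: launch `bdevperf` paused at `--wait-for-rpc`, start the accel framework over the bperf socket, attach a controller with data digest enabled (`--ddgst`), then drive the timed run with `perform_tests`. Paths and arguments below are copied from the log; the wrapper function is illustrative and assumes a live target at 10.0.0.2:4420.

```shell
#!/usr/bin/env bash
# Sketch of one run_bperf pass (randread, 4 KiB, qd 128) as traced above.
# Requires the nvmf target from the earlier steps; defined but not run here.
run_bperf_ddgst() {
    local spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    local sock=/var/tmp/bperf.sock
    "$spdk/build/examples/bdevperf" -m 2 -r "$sock" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    local bperfpid=$!
    # finish framework init (bdevperf was started with --wait-for-rpc)
    "$spdk/scripts/rpc.py" -s "$sock" framework_start_init
    # attach with data digest so every IO exercises crc32c in the accel layer
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 2-second timed run, results printed by bdevperf
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
    kill "$bperfpid"
}
```

The `--ddgst` flag is the point of the "clean" digest test: it forces the initiator to compute NVMe/TCP data digests, which is what the accel-stats check afterwards verifies.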
00:24:36.566 [2024-07-15 21:46:27.249192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430577 ] 00:24:36.566 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.566 [2024-07-15 21:46:27.311544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.824 [2024-07-15 21:46:27.414369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.824 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.824 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:36.824 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:36.824 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:36.824 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:37.081 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.081 21:46:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:37.646 nvme0n1 00:24:37.646 21:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:37.646 21:46:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:24:37.646 Running I/O for 2 seconds... 00:24:40.169 00:24:40.169 Latency(us) 00:24:40.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.169 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:40.169 nvme0n1 : 2.00 19574.57 76.46 0.00 0.00 6529.32 3179.71 14951.92 00:24:40.169 =================================================================================================================== 00:24:40.169 Total : 19574.57 76.46 0.00 0.00 6529.32 3179.71 14951.92 00:24:40.169 0 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:40.169 | select(.opcode=="crc32c") 00:24:40.169 | "\(.module_name) \(.executed)"' 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 430577 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 430577 ']' 00:24:40.169 21:46:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 430577 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 430577 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 430577' 00:24:40.169 killing process with pid 430577 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 430577 00:24:40.169 Received shutdown signal, test time was about 2.000000 seconds 00:24:40.169 00:24:40.169 Latency(us) 00:24:40.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.169 =================================================================================================================== 00:24:40.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 430577 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:40.169 21:46:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=430885 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 430885 /var/tmp/bperf.sock 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 430885 ']' 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:40.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.169 21:46:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:40.449 [2024-07-15 21:46:30.968520] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
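The `get_accel_stats` check repeated after each run pairs `accel_get_stats` with the jq filter shown in the trace to extract which module executed the crc32c operations. The filter can be exercised standalone; the JSON below is an illustrative sample, not captured from this run.

```shell
#!/usr/bin/env bash
# The digest test expects module_name "software" here (scan_dsa=false), which
# is exactly what host/digest.sh@96 asserts. Sample stats JSON is made up.
stats='{"operations":[
  {"opcode":"copy","module_name":"software","executed":12},
  {"opcode":"crc32c","module_name":"software","executed":39278}]}'
printf '%s' "$stats" |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# -> software 39278
```

The `read -r acc_module acc_executed` on the trace line then splits that one-line output into the module name and execution count that the pass/fail checks use.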
00:24:40.449 [2024-07-15 21:46:30.968616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430885 ] 00:24:40.449 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:40.449 Zero copy mechanism will not be used. 00:24:40.449 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.449 [2024-07-15 21:46:31.022754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.449 [2024-07-15 21:46:31.118124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.449 21:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.449 21:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:40.450 21:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:40.450 21:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:40.450 21:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:41.015 21:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:41.015 21:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:41.272 nvme0n1 00:24:41.272 21:46:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:41.272 21:46:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:41.530 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:41.530 Zero copy mechanism will not be used. 00:24:41.530 Running I/O for 2 seconds... 00:24:43.425 00:24:43.425 Latency(us) 00:24:43.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.425 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:43.425 nvme0n1 : 2.00 4754.58 594.32 0.00 0.00 3361.05 837.40 5679.79 00:24:43.425 =================================================================================================================== 00:24:43.425 Total : 4754.58 594.32 0.00 0.00 3361.05 837.40 5679.79 00:24:43.425 0 00:24:43.425 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:43.425 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:43.425 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:43.425 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:43.425 | select(.opcode=="crc32c") 00:24:43.425 | "\(.module_name) \(.executed)"' 00:24:43.425 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 430885 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 430885 ']' 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 430885 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:43.684 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.685 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 430885 00:24:43.685 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:43.685 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:43.685 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 430885' 00:24:43.685 killing process with pid 430885 00:24:43.685 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 430885 00:24:43.685 Received shutdown signal, test time was about 2.000000 seconds 00:24:43.685 00:24:43.685 Latency(us) 00:24:43.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.685 =================================================================================================================== 00:24:43.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.685 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 430885 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:43.943 
21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=431202 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 431202 /var/tmp/bperf.sock 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 431202 ']' 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:43.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.943 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:43.943 [2024-07-15 21:46:34.675099] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
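The `killprocess` teardown that follows each bperf run is visible step by step in the trace: a `kill -0` liveness check, a `uname`/`ps -o comm=` lookup of the process name, then a plain SIGTERM (the SIGKILL branch is reserved for a `sudo` wrapper, which would otherwise swallow the TERM). The sketch below re-creates that logic; it is illustrative, not the verbatim autotest_common.sh helper.

```shell
#!/usr/bin/env bash
# Re-creation of the killprocess logic seen in the trace (common.sh@948-967).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # must still be running
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" = sudo ]; then
            kill -9 "$pid"                        # sudo wrapper: force kill
        else
            echo "killing process with pid $pid"
            kill "$pid"                           # ordinary process: SIGTERM
        fi
    else
        kill -9 "$pid"                            # non-Linux fallback
    fi
}
```

The `wait $pid` that the trace issues afterwards (common.sh@972) reaps the process and lets bdevperf print its shutdown latency summary before the next pass starts.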
00:24:43.943 [2024-07-15 21:46:34.675214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431202 ] 00:24:43.943 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.943 [2024-07-15 21:46:34.729736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.201 [2024-07-15 21:46:34.826477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.201 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.201 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:44.201 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:44.201 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:44.201 21:46:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:44.767 21:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:44.767 21:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:45.025 nvme0n1 00:24:45.025 21:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:45.025 21:46:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:24:45.025 Running I/O for 2 seconds... 00:24:47.551 00:24:47.551 Latency(us) 00:24:47.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.551 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:47.551 nvme0n1 : 2.01 20899.08 81.64 0.00 0.00 6111.08 3046.21 13981.01 00:24:47.551 =================================================================================================================== 00:24:47.551 Total : 20899.08 81.64 0.00 0.00 6111.08 3046.21 13981.01 00:24:47.551 0 00:24:47.551 21:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:47.551 21:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:47.551 21:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:47.551 21:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:47.551 21:46:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:47.551 | select(.opcode=="crc32c") 00:24:47.551 | "\(.module_name) \(.executed)"' 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 431202 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 431202 ']' 00:24:47.551 21:46:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 431202 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 431202 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 431202' 00:24:47.551 killing process with pid 431202 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 431202 00:24:47.551 Received shutdown signal, test time was about 2.000000 seconds 00:24:47.551 00:24:47.551 Latency(us) 00:24:47.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.551 =================================================================================================================== 00:24:47.551 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 431202 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:47.551 21:46:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=431600 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 431600 /var/tmp/bperf.sock 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 431600 ']' 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:47.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.551 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:47.809 [2024-07-15 21:46:38.385916] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
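Across the four `run_bperf` invocations in this trace (host/digest.sh@128-131) only the workload, IO size, and queue depth change; everything else, including the 2-second duration, is fixed. The matrix, reconstructed from the trace, with `echo` standing in for the real bdevperf launch:

```shell
#!/usr/bin/env bash
# The four digest-clean passes seen above: large-block passes use qd 16 and
# trigger the "zero copy ... will not be used" notice, since 131072 exceeds
# the 65536-byte zero-copy threshold.
matrix="randread 4096 128
randread 131072 16
randwrite 4096 128
randwrite 131072 16"
while read -r rw bs qd; do
    echo "bdevperf -w $rw -o $bs -q $qd -t 2"
done <<< "$matrix"
```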
00:24:47.809 [2024-07-15 21:46:38.386011] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431600 ] 00:24:47.809 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:47.809 Zero copy mechanism will not be used. 00:24:47.809 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.809 [2024-07-15 21:46:38.441289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.809 [2024-07-15 21:46:38.543524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.809 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.809 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:47.809 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:47.809 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:47.809 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:48.374 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.374 21:46:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.632 nvme0n1 00:24:48.632 21:46:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:48.632 21:46:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.889 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:48.889 Zero copy mechanism will not be used. 00:24:48.889 Running I/O for 2 seconds... 00:24:50.786 00:24:50.786 Latency(us) 00:24:50.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.786 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:50.786 nvme0n1 : 2.00 5313.98 664.25 0.00 0.00 3003.32 2402.99 9854.67 00:24:50.786 =================================================================================================================== 00:24:50.786 Total : 5313.98 664.25 0.00 0.00 3003.32 2402.99 9854.67 00:24:50.786 0 00:24:50.786 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:50.786 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:50.786 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:50.786 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:50.786 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:50.786 | select(.opcode=="crc32c") 00:24:50.786 | "\(.module_name) \(.executed)"' 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 431600 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 431600 ']' 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 431600 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.044 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 431600 00:24:51.302 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:51.302 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:51.302 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 431600' 00:24:51.302 killing process with pid 431600 00:24:51.302 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 431600 00:24:51.302 Received shutdown signal, test time was about 2.000000 seconds 00:24:51.302 00:24:51.302 Latency(us) 00:24:51.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.302 =================================================================================================================== 00:24:51.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.302 21:46:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 431600 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 430460 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 430460 ']' 00:24:51.302 21:46:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 430460 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 430460 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 430460' 00:24:51.302 killing process with pid 430460 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 430460 00:24:51.302 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 430460 00:24:51.561 00:24:51.561 real 0m15.514s 00:24:51.561 user 0m30.736s 00:24:51.561 sys 0m4.083s 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:51.561 ************************************ 00:24:51.561 END TEST nvmf_digest_clean 00:24:51.561 ************************************ 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest 
-- common/autotest_common.sh@10 -- # set +x 00:24:51.561 ************************************ 00:24:51.561 START TEST nvmf_digest_error 00:24:51.561 ************************************ 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=431950 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 431950 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 431950 ']' 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:51.561 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.819 [2024-07-15 21:46:42.364857] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:24:51.819 [2024-07-15 21:46:42.364956] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.819 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.819 [2024-07-15 21:46:42.426504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.819 [2024-07-15 21:46:42.529150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.819 [2024-07-15 21:46:42.529196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.819 [2024-07-15 21:46:42.529210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.819 [2024-07-15 21:46:42.529221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.819 [2024-07-15 21:46:42.529231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:51.819 [2024-07-15 21:46:42.529256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:51.819 [2024-07-15 21:46:42.589768] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.819 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:52.077 null0 00:24:52.077 [2024-07-15 21:46:42.689354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.077 
[2024-07-15 21:46:42.713520] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:52.077 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.077 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:52.077 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:52.077 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=432053 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 432053 /var/tmp/bperf.sock 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 432053 ']' 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:52.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.078 21:46:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:52.078 [2024-07-15 21:46:42.766001] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:24:52.078 [2024-07-15 21:46:42.766092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432053 ] 00:24:52.078 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.078 [2024-07-15 21:46:42.820170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.335 [2024-07-15 21:46:42.919710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.335 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:52.335 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:52.335 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:52.335 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:52.593 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:52.593 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.593 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:52.593 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:52.593 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:52.593 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.156 nvme0n1 00:24:53.156 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:53.156 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.156 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:53.156 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.156 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:53.156 21:46:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:53.156 Running I/O for 2 seconds... 
00:24:53.156 [2024-07-15 21:46:43.881101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.156 [2024-07-15 21:46:43.881168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.156 [2024-07-15 21:46:43.881188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.156 [2024-07-15 21:46:43.894364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.156 [2024-07-15 21:46:43.894395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.156 [2024-07-15 21:46:43.894411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.156 [2024-07-15 21:46:43.905677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.156 [2024-07-15 21:46:43.905707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.156 [2024-07-15 21:46:43.905724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.156 [2024-07-15 21:46:43.920055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.156 [2024-07-15 21:46:43.920085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.156 [2024-07-15 21:46:43.920102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.156 [2024-07-15 21:46:43.934082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.156 [2024-07-15 21:46:43.934112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.156 [2024-07-15 21:46:43.934128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.156 [2024-07-15 21:46:43.947040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.156 [2024-07-15 21:46:43.947071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.156 [2024-07-15 21:46:43.947087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.413 [2024-07-15 21:46:43.960333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.413 [2024-07-15 21:46:43.960363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.413 [2024-07-15 21:46:43.960380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:43.973763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:43.973796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:43.973813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:43.985592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:43.985620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:43.985636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:43.999723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:43.999752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:43.999769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.013610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.013639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:44.013659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.025722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.025750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:53.414 [2024-07-15 21:46:44.025766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.038234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.038262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:44.038278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.051178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.051207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:44.051223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.064151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.064178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:44.064204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.077124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.077159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:59 nsid:1 lba:7932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:44.077176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.090152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.090181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:44.090202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.103328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.103357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:44.103373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.116851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.116888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:53.414 [2024-07-15 21:46:44.116905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:53.414 [2024-07-15 21:46:44.129724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:53.414 [2024-07-15 21:46:44.129751] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:53.414 [2024-07-15 21:46:44.129767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:53.414 [2024-07-15 21:46:44.142565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100)
00:24:53.414 [2024-07-15 21:46:44.142594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:53.414 [2024-07-15 21:46:44.142610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:53.414 [2024-07-15 21:46:44.155714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100)
00:24:53.414 [2024-07-15 21:46:44.155745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:53.414 [2024-07-15 21:46:44.155762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:53.414 [2024-07-15 21:46:44.168598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100)
00:24:53.414 [2024-07-15 21:46:44.168626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:53.414 [2024-07-15 21:46:44.168642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:54.446 [2024-07-15 21:46:45.179427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100)
00:24:54.446 [2024-07-15 21:46:45.179454] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.446 [2024-07-15 21:46:45.179476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.446 [2024-07-15 21:46:45.194213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.446 [2024-07-15 21:46:45.194241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.446 [2024-07-15 21:46:45.194257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.446 [2024-07-15 21:46:45.208135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.446 [2024-07-15 21:46:45.208170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.446 [2024-07-15 21:46:45.208187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.446 [2024-07-15 21:46:45.219767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.446 [2024-07-15 21:46:45.219795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.446 [2024-07-15 21:46:45.219811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.446 [2024-07-15 21:46:45.232291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.446 [2024-07-15 
21:46:45.232318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.446 [2024-07-15 21:46:45.232333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.703 [2024-07-15 21:46:45.247981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.703 [2024-07-15 21:46:45.248008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.703 [2024-07-15 21:46:45.248031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.703 [2024-07-15 21:46:45.258021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.703 [2024-07-15 21:46:45.258048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.703 [2024-07-15 21:46:45.258071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.703 [2024-07-15 21:46:45.272827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.703 [2024-07-15 21:46:45.272855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.703 [2024-07-15 21:46:45.272871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.703 [2024-07-15 21:46:45.286726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1281100) 00:24:54.703 [2024-07-15 21:46:45.286761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.703 [2024-07-15 21:46:45.286777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.703 [2024-07-15 21:46:45.297405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.703 [2024-07-15 21:46:45.297432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.703 [2024-07-15 21:46:45.297448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.703 [2024-07-15 21:46:45.313409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.703 [2024-07-15 21:46:45.313437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.703 [2024-07-15 21:46:45.313453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.703 [2024-07-15 21:46:45.326276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.326306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.326323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.339175] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.339212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.339229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.352099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.352127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.352150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.365084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.365112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.365128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.378082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.378111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.378127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.391052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.391080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.391099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.404000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.404028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.404049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.416960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.416988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.417004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.430392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.430420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.430436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.443452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.443482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.443498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.456370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.456398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.456414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.469283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.469311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.469328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.482214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.482242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 
21:46:45.482259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.704 [2024-07-15 21:46:45.495071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.704 [2024-07-15 21:46:45.495099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.704 [2024-07-15 21:46:45.495116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.961 [2024-07-15 21:46:45.508060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.961 [2024-07-15 21:46:45.508088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.961 [2024-07-15 21:46:45.508108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.961 [2024-07-15 21:46:45.521021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.961 [2024-07-15 21:46:45.521056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.961 [2024-07-15 21:46:45.521072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.961 [2024-07-15 21:46:45.534061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.961 [2024-07-15 21:46:45.534089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19806 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.961 [2024-07-15 21:46:45.534106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.961 [2024-07-15 21:46:45.547031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.547058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.547074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.559718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.559748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.559765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.572900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.572928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.572944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.585857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.585884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.585901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.600639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.600666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.600686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.613472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.613501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.613517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.624682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.624709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.624725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.639067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.639095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.639111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.652703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.652730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.652747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.666832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.666859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.666875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.678222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.678249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.678265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.691533] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.691561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.691577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.705095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.705124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.705148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.717957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.717985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.718002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.730844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.730872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.730888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:54.962 [2024-07-15 21:46:45.743844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:54.962 [2024-07-15 21:46:45.743872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:54.962 [2024-07-15 21:46:45.743900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.756762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.756792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 21:46:45.756808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.769646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.769674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 21:46:45.769690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.782556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.782584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 21:46:45.782600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.795479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.795508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 21:46:45.795525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.808471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.808500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 21:46:45.808518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.821394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.821422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 21:46:45.821438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.834322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.834351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 
21:46:45.834367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.847223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.847251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 21:46:45.847267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 [2024-07-15 21:46:45.861146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1281100) 00:24:55.219 [2024-07-15 21:46:45.861174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.219 [2024-07-15 21:46:45.861207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:55.219 00:24:55.219 Latency(us) 00:24:55.219 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.219 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:55.219 nvme0n1 : 2.01 19520.03 76.25 0.00 0.00 6548.64 3495.25 20194.80 00:24:55.219 =================================================================================================================== 00:24:55.219 Total : 19520.03 76.25 0.00 0.00 6548.64 3495.25 20194.80 00:24:55.219 0 00:24:55.219 21:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:55.219 21:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:55.219 21:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:55.219 | .driver_specific 00:24:55.219 | .nvme_error 00:24:55.219 | .status_code 00:24:55.219 | .command_transient_transport_error' 00:24:55.219 21:46:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 )) 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 432053 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 432053 ']' 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 432053 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432053 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 432053' 00:24:55.477 killing process with pid 432053 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 432053 00:24:55.477 Received shutdown signal, test time was about 2.000000 seconds 00:24:55.477 00:24:55.477 Latency(us) 00:24:55.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.477 
=================================================================================================================== 00:24:55.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.477 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 432053 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=432368 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 432368 /var/tmp/bperf.sock 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 432368 ']' 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:55.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
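The check `(( 153 > 0 ))` above passes because the harness pulls the transient-error counter out of `bdev_get_iostat` output with the jq filter `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`. A minimal illustrative sketch of that extraction, against a hand-written iostat-shaped document (the field layout is inferred from the jq path in this log; this is not the harness itself, and the sample JSON is fabricated for illustration only):

```python
import json

# Hand-written sample shaped like bdev_get_iostat output; only the fields
# the jq path touches are included (layout assumed from the log's filter).
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 153
          }
        }
      }
    }
  ]
}
""")

# Python equivalent of:
#   jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
#          | .command_transient_transport_error'
count = (sample["bdevs"][0]
         ["driver_specific"]
         ["nvme_error"]
         ["status_code"]
         ["command_transient_transport_error"])

# The digest test considers itself passed when injected crc32c corruption
# surfaced as at least one transient transport error.
assert count > 0
print(count)  # → 153
```

The same counter is what the earlier `get_transient_errcount` helper feeds into the shell arithmetic test.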
00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.734 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:55.734 [2024-07-15 21:46:46.434557] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:24:55.734 [2024-07-15 21:46:46.434640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432368 ] 00:24:55.734 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:55.734 Zero copy mechanism will not be used. 00:24:55.734 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.734 [2024-07-15 21:46:46.482997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.991 [2024-07-15 21:46:46.580660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.991 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.991 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:55.991 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:55.991 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:56.249 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:56.249 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.249 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:24:56.249 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.249 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.249 21:46:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.815 nvme0n1 00:24:56.815 21:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:56.815 21:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.815 21:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:56.815 21:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.815 21:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:56.815 21:46:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:57.074 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:57.074 Zero copy mechanism will not be used. 00:24:57.074 Running I/O for 2 seconds... 
00:24:57.074 [2024-07-15 21:46:47.630701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.630751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.630771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.637360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.637390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.637407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.644811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.644841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.644857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.650558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.650587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.650604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.655730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.655758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.655775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.660774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.660803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.660819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.666734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.666763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.666779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.672345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.672374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.672390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.678919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.678948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.678971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.685644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.685672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.685689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.692497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.692524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.692540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.698855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.698882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.074 [2024-07-15 21:46:47.698898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.702195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.702222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.702238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.707507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.707534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.707550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.713030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.713057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.713072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.718629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.718656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.074 [2024-07-15 21:46:47.718672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.074 [2024-07-15 21:46:47.724204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.074 [2024-07-15 21:46:47.724231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.724247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.729807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.729843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.729859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.735498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.735526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.735542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.742036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.742064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.742080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.747347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.747374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.747390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.752365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.752393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.752409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.756710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.756738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.756754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.761310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 
00:24:57.075 [2024-07-15 21:46:47.761337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.761353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.766190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.766217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.766233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.770573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.770600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.770615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.774850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.774876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.774891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.779157] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.779183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.779198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.783562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.783588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.783603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.788815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.788843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.788859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.793737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.793764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.793780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.798152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.798179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.798202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.802567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.802593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.802608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.806987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.807013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.807028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.811423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.811449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.811469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.815823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.815849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.815864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.820728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.820755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.820771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.825915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.825941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.825957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.830365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.830391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.830407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.834866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.834892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.834908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.839199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.839224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.839240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.843466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.843492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.843508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.847499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.847525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.075 [2024-07-15 21:46:47.847541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.850841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.850872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.850888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.854925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.854951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.854967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.075 [2024-07-15 21:46:47.860282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.075 [2024-07-15 21:46:47.860309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.075 [2024-07-15 21:46:47.860324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.076 [2024-07-15 21:46:47.864828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.076 [2024-07-15 21:46:47.864856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.076 [2024-07-15 21:46:47.864872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.869809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.869837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.869866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.874750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.874790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.874807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.880392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.880419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.880447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.887559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.887588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.887616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.894248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.894280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.894298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.899825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.899854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.899870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.904905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.904933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.904949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.911064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.911092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.911108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.915749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.915778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.915793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.920507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.920534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.920550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.925630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.925658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.925675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.931012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.931040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.931057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.936899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.936927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.936943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.942043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.942070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.942092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.946834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.946861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.946877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.951645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.951672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.951688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.956664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.956692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.956708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.961489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.961517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.961532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.965929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.965956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.965971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.970901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.970928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.970945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.974557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.974585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.974601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.979390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.979418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.979434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.984509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.984537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.984553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.990015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.990043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.335 [2024-07-15 21:46:47.990059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.335 [2024-07-15 21:46:47.995391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.335 [2024-07-15 21:46:47.995419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:47.995435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.000390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.000418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.000434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.005077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.005106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.005122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.010132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.010168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.010184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.015181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.015209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.015225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.020970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.020997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.021013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.028206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.028266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.028288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.034957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.034985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.035002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.041905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.041933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.041949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.047457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.047485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.047502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.052480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.052508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.052524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.056432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.056460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.056476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.061474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.061502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.061518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.066572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.066600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.066616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.071020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.071046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.071062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.076136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.076175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.076192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.082372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.082400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.082416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.088210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.088238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.088255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.093305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.093333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.093349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.098442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.098469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.098485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.102957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.102984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.103000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.108132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.108174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.108190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.114604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.114631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.114647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.120466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.120495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.120510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.336 [2024-07-15 21:46:48.125674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.336 [2024-07-15 21:46:48.125703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.336 [2024-07-15 21:46:48.125719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.596 [2024-07-15 21:46:48.130550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.130591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.130609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.135590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.135618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.135649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.140982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.141030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.141047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.147701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.147733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.147750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.153277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.153307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.153324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.158516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.158544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.158560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.163867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.163894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.163911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.169448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.169476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.169499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.175150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.175177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.175199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.180575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.180603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.180619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.185765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.185792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.185808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.191554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.191590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.191605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.196753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.196780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.196796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.200649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.200678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.200693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.203189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.203215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.203231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.207889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.207916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.207931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.214006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.214038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.214054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.221384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.221414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.221429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.227189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.227226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.227242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.232776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.232812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.232827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.238508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.238536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.238552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.245079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.245107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.245123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.250032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.250068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.250084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.255195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.255222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.255238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.597 [2024-07-15 21:46:48.259808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.597 [2024-07-15 21:46:48.259837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.597 [2024-07-15 21:46:48.259853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.598 [2024-07-15 21:46:48.263049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.598 [2024-07-15 21:46:48.263075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.598 [2024-07-15 21:46:48.263091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.598 [2024-07-15 21:46:48.266704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.598 [2024-07-15 21:46:48.266732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.598 [2024-07-15 21:46:48.266747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.598 [2024-07-15 21:46:48.271236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.598 [2024-07-15 21:46:48.271263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.598 [2024-07-15 21:46:48.271279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.276521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.276550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.276566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.282332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.282361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.282377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.288072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.288100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.288116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.293774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.293803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.293819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.299502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.299531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.299547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.304276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.304303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.304325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.308964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.308991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.309007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.313989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.314016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.314032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.319073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.319100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.319116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.324144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.324172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.324188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.330304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.330332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.330348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.337430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.337458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.598 [2024-07-15 21:46:48.337474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.343017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.343045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.343061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.347026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.347053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.347069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.352385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.352420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.352436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.357460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.357487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.357502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.362399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.362435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.362450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.366790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.366817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.366833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.371170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.371197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.371213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.375456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.375490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.375505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.379871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.379897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.379912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.598 [2024-07-15 21:46:48.384817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.598 [2024-07-15 21:46:48.384845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.598 [2024-07-15 21:46:48.384861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.390428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.390458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.390487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.396228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 
00:24:57.858 [2024-07-15 21:46:48.396257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.396285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.402363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.402405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.402433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.407972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.408003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.408019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.413445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.413474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.413491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.418606] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.418634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.418649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.424357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.424386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.424402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.429633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.429661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.429677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.433718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.433747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.433763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:24:57.858 [2024-07-15 21:46:48.438193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.438221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.438243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.442552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.442579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.442594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.446990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.447025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.447040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.451390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.451417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.858 [2024-07-15 21:46:48.451432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.858 [2024-07-15 21:46:48.455855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.858 [2024-07-15 21:46:48.455881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.455896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.460296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.460325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.460341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.464610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.464636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.464651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.468474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.468502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.468517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.470940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.470966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.470981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.475893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.475926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.475942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.480249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.480277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.480293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.485079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.485106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.859 [2024-07-15 21:46:48.485121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.489424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.489450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.489478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.493833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.493868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.493883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.498305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.498332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.498348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.503180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.503206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.503222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.508635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.508664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.508680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.514046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.514074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.514089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.519287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.519314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.519330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.524871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.524907] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.524923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.529855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.529891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.529906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.534626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.534669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.534686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.539498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:57.859 [2024-07-15 21:46:48.539533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.859 [2024-07-15 21:46:48.539548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:57.859 [2024-07-15 21:46:48.544401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 
00:24:57.859 [2024-07-15 21:46:48.544429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.544445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.549434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.549462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.549478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.554064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.554093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.554109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.559120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.559155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.559179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.564831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.564859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.564875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.570451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.570479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.570495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.576264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.576292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.576308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.581275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.581303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.581318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.585896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.585924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.585940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.590384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.590412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.590428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.859 [2024-07-15 21:46:48.594851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.859 [2024-07-15 21:46:48.594877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.859 [2024-07-15 21:46:48.594892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.599331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.599357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.599372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.603898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.603946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.603963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.609483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.609512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.609528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.614004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.614033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.614048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.618567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.618595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.618611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.623029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.623057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.623073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.627567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.627595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.627611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.632121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.632156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.632173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.636705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.636733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.636749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.642058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.642089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.642113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:57.860 [2024-07-15 21:46:48.646741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:57.860 [2024-07-15 21:46:48.646771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.860 [2024-07-15 21:46:48.646787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.651939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.651984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.652013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.656748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.656783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.656800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.661257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.661287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.661304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.665752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.665781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.665797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.670183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.670211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.670227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.674529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.674557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.674586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.678885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.678911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.678939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.683236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.683270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.683287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.688306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.688335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.688350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.693785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.693814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.693830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.699802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.699832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.699848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.707254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.707284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.707301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.713619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.713652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.713669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.720823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.720854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.720870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.727761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.727794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.727811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.119 [2024-07-15 21:46:48.733829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.119 [2024-07-15 21:46:48.733859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.119 [2024-07-15 21:46:48.733875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.741302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.741333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.741349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.748373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.748405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.748421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.756202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.756233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.756249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.760208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.760238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.760254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.767512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.767543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.767560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.774213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.774244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.774260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.781309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.781340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.781356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.788359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.788389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.788406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.793723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.793751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.793775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.798040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.798068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.798084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.802403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.802431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.802447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.807333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.807362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.807377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.812659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.812687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.812703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.817205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.817233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.817249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.822314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.822343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.822359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.826400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.826430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.826446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.830709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.830737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.830753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.835435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.835472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.835488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.840425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.840453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.840469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.846188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.846219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.846235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.851397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.851426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.851442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.856580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.856611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.856627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.862198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.862228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.862244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.867315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.867343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.867359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.872709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.120 [2024-07-15 21:46:48.872738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.120 [2024-07-15 21:46:48.872754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.120 [2024-07-15 21:46:48.878136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.121 [2024-07-15 21:46:48.878170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.121 [2024-07-15 21:46:48.878186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.121 [2024-07-15 21:46:48.882638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.121 [2024-07-15 21:46:48.882668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.121 [2024-07-15 21:46:48.882684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.121 [2024-07-15 21:46:48.886313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.121 [2024-07-15 21:46:48.886342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.121 [2024-07-15 21:46:48.886358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.121 [2024-07-15 21:46:48.891121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.121 [2024-07-15 21:46:48.891155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.121 [2024-07-15 21:46:48.891172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.121 [2024-07-15 21:46:48.895967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.121 [2024-07-15 21:46:48.895995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.121 [2024-07-15 21:46:48.896011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.121 [2024-07-15 21:46:48.901368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.121 [2024-07-15 21:46:48.901397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.121 [2024-07-15 21:46:48.901413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.121 [2024-07-15 21:46:48.907175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.121 [2024-07-15 21:46:48.907234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.121 [2024-07-15 21:46:48.907263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.913637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.913673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.913690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.920560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.920595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.920613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.928156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.928188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.928213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.932472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.932502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.932518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.938389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.938420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.938435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.943823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.943852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.943869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.948552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.948580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.948597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.953463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.953492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.953508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.958432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.958460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.958476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.963028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.963056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.963073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.967952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.967981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.967997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.381 [2024-07-15 21:46:48.973062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.381 [2024-07-15 21:46:48.973101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.381 [2024-07-15 21:46:48.973119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.381 [2024-07-15 21:46:48.978585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.381 [2024-07-15 21:46:48.978616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.381 [2024-07-15 21:46:48.978631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.381 [2024-07-15 21:46:48.983611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.381 [2024-07-15 21:46:48.983641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:48.983658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:48.989439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:48.989469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:48.989485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:48.993792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:48.993820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:48.993835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:48.998136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:48.998173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:48.998190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.002530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.002560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.002575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.006903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.006930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.006946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.011218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.011246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.011261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.015564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.015592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.015607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.020565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.020593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.020608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.025626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.025653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.025669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.029943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 
00:24:58.382 [2024-07-15 21:46:49.029970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.029986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.034280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.034308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.034323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.038716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.038744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.038759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.042985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.043015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.043031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.048635] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.048664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.048680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.053267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.053295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.053323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.058247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.058282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.058298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.063335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.063366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.063382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.068487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.068516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.068532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.073696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.073726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.073742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.078732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.078761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.078776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.083866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.083895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.083911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.088776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.088806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.088821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.092278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.092307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.092323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.096338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.096374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.096391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.101066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.101096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.101112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.105674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.105703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.105719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.110087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.110118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.110135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.113129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.113164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.382 [2024-07-15 21:46:49.113180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.382 [2024-07-15 21:46:49.119159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.382 [2024-07-15 21:46:49.119192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:58.383 [2024-07-15 21:46:49.119223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.124451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.124480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.124496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.129603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.129632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.129647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.134172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.134200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.134228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.138927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.138958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.138975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.143735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.143764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.143780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.149517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.149556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.149573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.154688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.154718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.154734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.161035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.161090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.161118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.383 [2024-07-15 21:46:49.167265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.383 [2024-07-15 21:46:49.167296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.383 [2024-07-15 21:46:49.167327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.174129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.174178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.174195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.179376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.179408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.179425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.184049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 
00:24:58.681 [2024-07-15 21:46:49.184079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.184104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.187040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.187069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.187085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.192494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.192525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.192542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.197373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.197402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.197431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.201886] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.201914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.201942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.206261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.206297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.206312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.210644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.210672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.210688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:58.681 [2024-07-15 21:46:49.215081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540) 00:24:58.681 [2024-07-15 21:46:49.215109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.681 [2024-07-15 21:46:49.215124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0
00:24:58.681 [2024-07-15 21:46:49.219907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.681 [2024-07-15 21:46:49.219936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.681 [2024-07-15 21:46:49.219952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.681 [2024-07-15 21:46:49.224840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.681 [2024-07-15 21:46:49.224870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.681 [2024-07-15 21:46:49.224886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record sequence (data digest error on tqpair=(0xe14540) -> READ command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for several dozen further qid:1 READs, timestamps 21:46:49.229848 through 21:46:49.617586, with varying cid, lba, and sqhd values ...]
00:24:58.965 [2024-07-15 21:46:49.623479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe14540)
00:24:58.965 [2024-07-15 21:46:49.623514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:58.965 [2024-07-15 21:46:49.623530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:58.965
00:24:58.965 Latency(us)
00:24:58.965 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:24:58.965 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:58.965 nvme0n1 : 2.00  5876.91  734.61  0.00  0.00  2718.62  600.75  8398.32
00:24:58.965
=================================================================================================================== 00:24:58.965 Total : 5876.91 734.61 0.00 0.00 2718.62 600.75 8398.32 00:24:58.965 0 00:24:58.965 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:58.965 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:58.965 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:58.965 | .driver_specific 00:24:58.965 | .nvme_error 00:24:58.965 | .status_code 00:24:58.965 | .command_transient_transport_error' 00:24:58.965 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 379 > 0 )) 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 432368 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 432368 ']' 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 432368 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432368 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 432368'
00:24:59.223 killing process with pid 432368
00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 432368
00:24:59.223 Received shutdown signal, test time was about 2.000000 seconds
00:24:59.223
00:24:59.223 Latency(us)
00:24:59.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:59.223 ===================================================================================================================
00:24:59.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:59.223 21:46:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 432368
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=432698
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 432698 /var/tmp/bperf.sock
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 432698 ']'
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:59.480 21:46:50
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:59.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:59.480 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:59.480 [2024-07-15 21:46:50.207421] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:24:59.480 [2024-07-15 21:46:50.207512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432698 ]
00:24:59.480 EAL: No free 2048 kB hugepages reported on node 1
00:24:59.480 [2024-07-15 21:46:50.264935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:59.737 [2024-07-15 21:46:50.365348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:59.737 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:59.737 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:59.737 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:59.737 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:59.994 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63
-- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:59.994 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:59.994 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:59.994 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:59.994 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:59.994 21:46:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:00.559 nvme0n1
00:25:00.559 21:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:00.559 21:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:00.559 21:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:00.559 21:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:00.559 21:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:00.559 21:46:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:00.559 Running I/O for 2 seconds...
00:25:00.559 [2024-07-15 21:46:51.221460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190fa3a0 00:25:00.559 [2024-07-15 21:46:51.222474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.222515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.233061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190e6fa8 00:25:00.559 [2024-07-15 21:46:51.233896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.233926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.247064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ddc00 00:25:00.559 [2024-07-15 21:46:51.248604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.248640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.259377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190e7818 00:25:00.559 [2024-07-15 21:46:51.261028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.261057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.270167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190de470 00:25:00.559 [2024-07-15 21:46:51.271951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.271982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.280220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190e23b8 00:25:00.559 [2024-07-15 21:46:51.281051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.281078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.293001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190fcdd0 00:25:00.559 [2024-07-15 21:46:51.293671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.293699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.305306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190efae0 00:25:00.559 [2024-07-15 21:46:51.306130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.306166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.317728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190f8e88 00:25:00.559 [2024-07-15 21:46:51.318724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.318752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.328385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190f0350 00:25:00.559 [2024-07-15 21:46:51.329614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.329642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:00.559 [2024-07-15 21:46:51.339946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190f92c0 00:25:00.559 [2024-07-15 21:46:51.340948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.559 [2024-07-15 21:46:51.340977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:00.817 [2024-07-15 21:46:51.352891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.817 [2024-07-15 21:46:51.353078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.817 [2024-07-15 21:46:51.353106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.817 [2024-07-15 21:46:51.365808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.817 [2024-07-15 21:46:51.365986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.817 [2024-07-15 21:46:51.366015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.817 [2024-07-15 21:46:51.378689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.817 [2024-07-15 21:46:51.378865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.817 [2024-07-15 21:46:51.378893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.817 [2024-07-15 21:46:51.391558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.817 [2024-07-15 21:46:51.391741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.391767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.404151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.404331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 
[2024-07-15 21:46:51.404359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.416758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.416926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.416955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.429349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.429521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.429549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.441935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.442104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.442152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.454607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.454777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24876 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.454804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.467216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.467387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.467412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.479784] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.479957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.479983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.492435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.492605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.492633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.505227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.505406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:85 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.505432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.517746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.517913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.517939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.530286] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.530456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.530482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.542822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.542991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.543016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.555342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.555511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.555537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.567911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.568079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.568110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.580420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.580586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.580612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.592932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 [2024-07-15 21:46:51.593103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.593148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:00.818 [2024-07-15 21:46:51.605481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:00.818 
[2024-07-15 21:46:51.605653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:00.818 [2024-07-15 21:46:51.605679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.618460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.618639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.618666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.631253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.631425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.631451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.643847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.644017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.644043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.656386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.656555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.656581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.668912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.669079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.669104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.681452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.681633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.681659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.693995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.694167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.694193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.706530] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.706698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.706723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.719002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.719173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.719199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.731510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.731679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.731708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.076 [2024-07-15 21:46:51.744050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.076 [2024-07-15 21:46:51.744229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.076 [2024-07-15 21:46:51.744256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:25:01.076 [2024-07-15 21:46:51.756653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.756821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.756847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.077 [2024-07-15 21:46:51.769179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.769363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.769389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.077 [2024-07-15 21:46:51.781759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.781928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.781954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.077 [2024-07-15 21:46:51.794261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.794437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.794462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.077 [2024-07-15 21:46:51.806762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.806931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.806957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.077 [2024-07-15 21:46:51.819280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.819450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.819476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.077 [2024-07-15 21:46:51.831747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.831918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.831943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.077 [2024-07-15 21:46:51.844249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.844419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.844445] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.077 [2024-07-15 21:46:51.856744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.077 [2024-07-15 21:46:51.856916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.077 [2024-07-15 21:46:51.856942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.869591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.869766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.869792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.882414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.882590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.882616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.895148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.895331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.895362] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.907660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.907828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.907854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.920181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.920354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.920380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.932675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.932846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.932872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.945242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.945416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21488 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:01.335 [2024-07-15 21:46:51.945444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.957795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.957969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.958000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.970522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.970696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.970722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.983200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.983371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.983397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:51.995852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:51.996024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:16333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:51.996052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.008451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.008632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.008658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.021076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.021256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.021283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.033721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.033890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.033916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.046441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.046614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.046640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.059067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.059245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.059272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.071729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.071900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.071926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.084396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.084568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.084594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.097043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 
[2024-07-15 21:46:52.097248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.097275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.109698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.109872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.109898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.335 [2024-07-15 21:46:52.122362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.335 [2024-07-15 21:46:52.122535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.335 [2024-07-15 21:46:52.122561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.135470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.135648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.135675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.148376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.148547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.148573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.160949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.161121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.161154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.173722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.173900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.173931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.186738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.186930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.186962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.200017] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.200213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.200241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.213023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.213211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.213248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.225668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.225840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.225871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.593 [2024-07-15 21:46:52.238271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.238444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.593 [2024-07-15 21:46:52.238473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:25:01.593 [2024-07-15 21:46:52.250967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.593 [2024-07-15 21:46:52.251144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.251173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.263574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.263743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.263771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.276164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.276336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.276362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.288730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.288900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.288927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.301367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.301537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.301564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.313979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.314153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.314180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.326646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.326818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.326846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.339214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.339392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.339420] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.351765] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.351933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.351961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.364330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.364500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.364526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.594 [2024-07-15 21:46:52.376932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.594 [2024-07-15 21:46:52.377105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.594 [2024-07-15 21:46:52.377156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.852 [2024-07-15 21:46:52.389982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.852 [2024-07-15 21:46:52.390165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.852 [2024-07-15 21:46:52.390196] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.402925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.403102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.403133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.415901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.416076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.416107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.428715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.428904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.428933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.441420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.441600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:01.853 [2024-07-15 21:46:52.441631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.454112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.454301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.454332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.466815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.466990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.467022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.479483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.479654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.479685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.492177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.492349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:9373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.492377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.505093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.505282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.505315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.517667] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.517842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.517869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.530289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.530463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.530494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.542875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.543059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.543085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.555528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.555698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.555737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.568202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.568376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.568401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.580837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.581004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.581033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.593468] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 
[2024-07-15 21:46:52.593639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.593667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.606054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.606233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.606262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.618637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.618810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.618839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.631284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.631458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.631488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:01.853 [2024-07-15 21:46:52.644105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:01.853 [2024-07-15 21:46:52.644334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:01.853 [2024-07-15 21:46:52.644374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.657155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.657328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.657356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.670057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.670252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.670297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.682808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.682983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.683010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.695361] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.695531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.695560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.707891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.708060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.708087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.720561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.720732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.720761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.733058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.733267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.733297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:25:02.111 [2024-07-15 21:46:52.745590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.745761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.745790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.758264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.758434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.758462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.770796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.770964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.770992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.783357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.783533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.783561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.795919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.796091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.796120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.808455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.111 [2024-07-15 21:46:52.808623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.111 [2024-07-15 21:46:52.808650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.111 [2024-07-15 21:46:52.820942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.112 [2024-07-15 21:46:52.821110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.112 [2024-07-15 21:46:52.821161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.112 [2024-07-15 21:46:52.833493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.112 [2024-07-15 21:46:52.833659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.112 [2024-07-15 21:46:52.833687] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.112 [2024-07-15 21:46:52.845976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.112 [2024-07-15 21:46:52.846154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.112 [2024-07-15 21:46:52.846181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.112 [2024-07-15 21:46:52.858433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.112 [2024-07-15 21:46:52.858601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.112 [2024-07-15 21:46:52.858626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.112 [2024-07-15 21:46:52.870933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.112 [2024-07-15 21:46:52.871107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.112 [2024-07-15 21:46:52.871134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.112 [2024-07-15 21:46:52.883466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.112 [2024-07-15 21:46:52.883636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.112 [2024-07-15 21:46:52.883665] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.112 [2024-07-15 21:46:52.896002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.112 [2024-07-15 21:46:52.896186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.112 [2024-07-15 21:46:52.896216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:52.908973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:52.909152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:52.909180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:52.921803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:52.921975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:52.922004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:52.934620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:52.934789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11814 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:02.370 [2024-07-15 21:46:52.934818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:52.947146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:52.947332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:52.947361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:52.959662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:52.959833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:52.959861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:52.972230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:52.972398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:52.972426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:52.984747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:52.984917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:16967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:52.984945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:52.997275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:52.997448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:52.997494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.009832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.010000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.010028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.022374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.022544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.022573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.034867] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.035036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.035065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.047428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.047600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.047630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.059919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.060090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.060131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.072469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.072641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.072671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.085016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 
[2024-07-15 21:46:53.085195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.085224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.097533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.097702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.097731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.110063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.110251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.110278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.122626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.122801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.122829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.135218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.135387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.135416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.147779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.147949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.147977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.370 [2024-07-15 21:46:53.160471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.370 [2024-07-15 21:46:53.160645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.370 [2024-07-15 21:46:53.160675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.627 [2024-07-15 21:46:53.173441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.627 [2024-07-15 21:46:53.173618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.627 [2024-07-15 21:46:53.173648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.627 [2024-07-15 21:46:53.186324] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.627 [2024-07-15 21:46:53.186499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.627 [2024-07-15 21:46:53.186527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.627 [2024-07-15 21:46:53.198915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.627 [2024-07-15 21:46:53.199088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.627 [2024-07-15 21:46:53.199115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.627 [2024-07-15 21:46:53.211506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xac7e10) with pdu=0x2000190ed0b0 00:25:02.627 [2024-07-15 21:46:53.211678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.627 [2024-07-15 21:46:53.211706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:02.627 00:25:02.627 Latency(us) 00:25:02.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.627 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:02.627 nvme0n1 : 2.01 20279.89 79.22 0.00 0.00 6297.99 2293.76 15437.37 00:25:02.627 =================================================================================================================== 00:25:02.627 Total : 20279.89 79.22 0.00 0.00 6297.99 2293.76 15437.37 00:25:02.627 0 
00:25:02.627 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:02.627 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:02.627 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:02.627 | .driver_specific 00:25:02.627 | .nvme_error 00:25:02.627 | .status_code 00:25:02.627 | .command_transient_transport_error' 00:25:02.627 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 432698 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 432698 ']' 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 432698 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 432698 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 432698' 00:25:02.884 killing process with pid 432698 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@967 -- # kill 432698 00:25:02.884 Received shutdown signal, test time was about 2.000000 seconds 00:25:02.884 00:25:02.884 Latency(us) 00:25:02.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.884 =================================================================================================================== 00:25:02.884 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.884 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 432698 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=433098 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 433098 /var/tmp/bperf.sock 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 433098 ']' 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:03.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.141 21:46:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.141 [2024-07-15 21:46:53.793371] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:25:03.141 [2024-07-15 21:46:53.793471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433098 ] 00:25:03.141 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:03.141 Zero copy mechanism will not be used. 00:25:03.141 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.141 [2024-07-15 21:46:53.849036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.398 [2024-07-15 21:46:53.955846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.398 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:03.398 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:03.398 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:03.398 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:03.655 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:03.655 21:46:54 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.655 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.655 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.655 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.655 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.912 nvme0n1 00:25:04.169 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:04.169 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.169 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:04.169 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.169 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:04.169 21:46:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:04.169 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:04.169 Zero copy mechanism will not be used. 00:25:04.169 Running I/O for 2 seconds... 
00:25:04.169 [2024-07-15 21:46:54.839454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.839760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.839805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.844089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.844379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.844409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.849150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.849428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.849456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.854147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.854424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.854452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.859614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.859880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.859907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.866349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.866629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.866657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.872189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.872465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.872494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.878700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.878991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.879019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.885277] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.885553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.885581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.891675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.892019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.892047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.898339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.898620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.898648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.904327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.904600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.170 [2024-07-15 21:46:54.904631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.908863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.909106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.909134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.913186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.913450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.913478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.917772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.918020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.918052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.922295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.922542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.922569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.926670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.926918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.926945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.931015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.931272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.931299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.935332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.935578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.935605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.940598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.940868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.940894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.946337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.946583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.946610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.951002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.951257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.951288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.956323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.170 [2024-07-15 21:46:54.956569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.956596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.170 [2024-07-15 21:46:54.960780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 
00:25:04.170 [2024-07-15 21:46:54.961061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.170 [2024-07-15 21:46:54.961088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:54.965305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:54.965562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:54.965590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:54.969814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:54.970066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:54.970093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:54.974583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:54.974838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:54.974877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:54.979469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:54.979716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:54.979743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:54.984185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:54.984434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:54.984462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:54.988815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:54.989076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:54.989104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:54.993162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:54.993424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:54.993453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:54.998461] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:54.998751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:54.998780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.004042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.004297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.004325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.008438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.008684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.008712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.013009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.013263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.013291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.017530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.017792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.017821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.021764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.022029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.022056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.025985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.026240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.026270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.030279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.030524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.030552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.034547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.034796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.034824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.038823] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.039066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.039092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.043048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.043299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.043328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.047913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.048165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.048192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.053016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.429 [2024-07-15 21:46:55.053271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.429 [2024-07-15 21:46:55.053299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.429 [2024-07-15 21:46:55.058957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.059247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.059274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.065065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.065351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.065378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.071806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.072092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.430 [2024-07-15 21:46:55.072120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.078108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.078386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.078416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.084046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.084311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.084341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.090008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.090293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.090319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.095988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.096309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.096339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.102012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.102305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.102334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.107982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.108299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.108333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.114010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.114310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.114339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.119746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.119934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.119960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.125363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.125619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.125647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.131052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.131331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.131358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.136971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.137308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.137335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.142433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 
00:25:04.430 [2024-07-15 21:46:55.142758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.142785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.147963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.148160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.148198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.153630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.153917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.153945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.159526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.159758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.159786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.165384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.165623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.165651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.170895] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.171203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.171231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.176846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.177166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.177193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.182733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.183008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.183038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.188386] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.188645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.188672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.194080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.194362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.194390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.199802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.200084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.200111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.430 [2024-07-15 21:46:55.205550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.430 [2024-07-15 21:46:55.205823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.430 [2024-07-15 21:46:55.205850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0
00:25:04.430 [2024-07-15 21:46:55.211751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:04.430 [2024-07-15 21:46:55.212035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.430 [2024-07-15 21:46:55.212063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... ~60 further record triplets from [2024-07-15 21:46:55.217] through [2024-07-15 21:46:55.592] (console time 00:25:04.430-00:25:04.951) omitted: each consists of a tcp.c:2081:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90", an nvme_qpair.c:243 *NOTICE* "WRITE sqid:1 cid:15 nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0" command print, and an nvme_qpair.c:474 *NOTICE* "COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0" completion print, differing only in timestamp, lba, and sqhd ...]
00:25:04.951 [2024-07-15 21:46:55.596329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:04.951 [2024-07-15 21:46:55.596578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.951 [2024-07-15 21:46:55.596618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.951 [2024-07-15 21:46:55.600267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.951 [2024-07-15 21:46:55.600474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.951 [2024-07-15 21:46:55.600504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.951 [2024-07-15 21:46:55.604233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.951 [2024-07-15 21:46:55.604439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.951 [2024-07-15 21:46:55.604467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.951 [2024-07-15 21:46:55.608220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.608421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.608448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.612233] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.612440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.612467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.616183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.616384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.616411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.620234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.620435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.620462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.624354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.624554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.624581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.628639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.628839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.628866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.632496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.632690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.632716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.636284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.636478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.636504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.640105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.640307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.640333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.643923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 
00:25:04.952 [2024-07-15 21:46:55.644123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.644155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.647755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.647961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.647987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.651797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.651994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.652021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.656432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.656752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.656778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.661537] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.661816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.661843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.667176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.667440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.667467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.672150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.672347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.672374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.676194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.676391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.676418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.680195] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.680396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.680422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.685361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.685563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.685591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.689208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.689405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.689438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.693055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.693251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.693278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.696902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.697097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.697123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.700702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.700897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.700924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.704876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.705069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.705095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.709590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.709833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.709860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.714751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.715066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.715093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.720366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.720650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.720678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.725694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.725906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.725933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.952 [2024-07-15 21:46:55.731833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.952 [2024-07-15 21:46:55.732078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.952 [2024-07-15 21:46:55.732105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:04.953 [2024-07-15 21:46:55.737463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:04.953 [2024-07-15 21:46:55.737686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.953 [2024-07-15 21:46:55.737713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.742636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.742894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.742936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.747040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.747259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.747286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.752148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.752378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:05.212 [2024-07-15 21:46:55.752406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.757687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.757972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.758000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.763145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.763430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.763457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.768200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.768497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.768524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.773358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.773601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.773627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.778913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.779134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.779168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.784713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.784953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.784980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.789876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.790176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.790202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.794561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.212 [2024-07-15 21:46:55.794758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.212 [2024-07-15 21:46:55.794785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.212 [2024-07-15 21:46:55.799216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.799463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.799489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.804264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.804540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.804566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.809328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.809638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.809665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.814981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 
00:25:05.213 [2024-07-15 21:46:55.815275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.815301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.820370] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.820623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.820658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.825524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.825820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.825847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.830634] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.830905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.830931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.836510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.836745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.836772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.841270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.841467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.841492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.845215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.845414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.845440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.849499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.213 [2024-07-15 21:46:55.849762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.213 [2024-07-15 21:46:55.849801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.213 [2024-07-15 21:46:55.853540] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.853728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.853758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.857536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.857730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.857758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.861720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.861919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.861946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.865695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.865890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.865918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.869713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.869905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.869932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.873795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.874026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.874052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.878287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.878464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.878491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.883144] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.883324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.883352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.888545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.888809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.888835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.894174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.894439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.894467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.899384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.899648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.899675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.904451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.904724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.904751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.909159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.909246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.909272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.912979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.913073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.913098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.213 [2024-07-15 21:46:55.916817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.213 [2024-07-15 21:46:55.916965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.213 [2024-07-15 21:46:55.916992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.920708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.920834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.920861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.924778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.924861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.924885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.929005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.929149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.929175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.933720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.933833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.933859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.939627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.939799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.939831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.943833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.943935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.943961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.947684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.947782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.947807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.951544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.951655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.951681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.955382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.955498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.955524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.959202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.959341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.959368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.963300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.963419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.963445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.967773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.967895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.967921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.972019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.972123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.972155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.975841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.975937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.975962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.979663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.979744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.979768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.983486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.983560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.983584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.987508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.987635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.987661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.992505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.992661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.992686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:55.997879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:55.998007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:55.998032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.214 [2024-07-15 21:46:56.003506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.214 [2024-07-15 21:46:56.003647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.214 [2024-07-15 21:46:56.003672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.007880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.008041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.008069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.011896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.012043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.012071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.015907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.016041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.016068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.020271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.020389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.020417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.025285] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.025404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.025432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.029691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.029779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.029804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.033535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.033608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.033633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.037380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.037457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.037482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.041172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.041238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.041262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.044990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.045068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.045092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.048806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.048890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.048920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.052853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.052954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.052979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.057712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.057905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.057930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.063111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.063226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.063252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.068404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.068545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.068570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.074290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.074405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.074430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.080145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.080260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.080285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.086076] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.086272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.086300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.091132] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.091338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.091365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.096211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.096433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.096463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.101357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.101524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.101563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.106419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.106577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.106606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.111857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.112059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.112086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.117258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.117400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.117428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.122458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.122606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.122633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.127501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.127687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.127713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.132595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.132789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.132815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.474 [2024-07-15 21:46:56.138340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.474 [2024-07-15 21:46:56.138535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.474 [2024-07-15 21:46:56.138560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.143434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.143626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.143651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.148507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.148695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.148720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.153632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.153784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.153808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.158961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.159099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.159125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.164824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.164992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.165020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.170018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.170215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.170240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.175072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.175176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.175204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.179311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.179459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.179485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.184169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.184360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.184393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.189114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.189296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.189322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.194202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.194375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.194400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.199317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.199472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.199498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.204636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.204803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.204828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.210326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.210523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.210548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:05.475 [2024-07-15 21:46:56.215635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90
00:25:05.475 [2024-07-15 21:46:56.215707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:05.475 [2024-07-15 21:46:56.215731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.221491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.221672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.475 [2024-07-15 21:46:56.221698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.227023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.227135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.475 [2024-07-15 21:46:56.227167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.232416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.232625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.475 [2024-07-15 21:46:56.232652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.237891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.238098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.475 [2024-07-15 21:46:56.238123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.243741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.243843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.475 [2024-07-15 21:46:56.243868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.248970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.249165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.475 [2024-07-15 21:46:56.249198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.254055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.254162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.475 [2024-07-15 21:46:56.254187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.258267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.258355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:05.475 [2024-07-15 21:46:56.258380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.475 [2024-07-15 21:46:56.262512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.475 [2024-07-15 21:46:56.262633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.475 [2024-07-15 21:46:56.262657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.266654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.266777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.266803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.270700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.270788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.270821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.274819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.274945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.274971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.278917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.279050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.279076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.284018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.284114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.284145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.289874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.290056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.290082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.295601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.295791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.295815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.300788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.300994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.301019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.305871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.306073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.306098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.310131] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.310285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.310309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.314061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 
00:25:05.735 [2024-07-15 21:46:56.314182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.314207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.317971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.318066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.735 [2024-07-15 21:46:56.318090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.735 [2024-07-15 21:46:56.321958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.735 [2024-07-15 21:46:56.322027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.322052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.326267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.326352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.326375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.331426] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.331496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.331521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.335287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.335366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.335390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.339145] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.339212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.339236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.343044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.343132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.343164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.346909] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.346988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.347012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.350979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.351091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.351146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.355041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.355111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.355146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.359057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.359155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.359181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.363129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.363231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.363257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.367092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.367169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.367195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.371053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.371153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.371178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.374958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.375038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.375063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.378793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.378867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.378892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.382618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.382687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.382718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.386733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.386811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.386836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.391046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.391136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.391167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.394912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.394987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.395012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.398728] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.398817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.398841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.402782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.402897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.402922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.407013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.407092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:05.736 [2024-07-15 21:46:56.407117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.411225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.411291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.411315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.415402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.415479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.415503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.419729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.419803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.419828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.425294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.425489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.425515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.430888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.431076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.431100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.434849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.434976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.435001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.438702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.438850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.438875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.736 [2024-07-15 21:46:56.442563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.736 [2024-07-15 21:46:56.442657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.736 [2024-07-15 21:46:56.442682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.446443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.446581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.446605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.450749] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.450840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.450864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.455416] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.455505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.455530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.459991] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 
00:25:05.737 [2024-07-15 21:46:56.460075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.460099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.463865] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.463957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.463981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.467671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.467750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.467775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.471423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.471490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.471514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.475238] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.475327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.475351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.479066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.479146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.479171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.483376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.483507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.483533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.487664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.487742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.487767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.491631] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.491710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.491739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.495702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.495782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.495806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.499755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.499845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.499870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.503998] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.504165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.504190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.508073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.508163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.508188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.512157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.512225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.512250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.516299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.516373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.516397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.520629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.520698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.520723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.737 [2024-07-15 21:46:56.524571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.737 [2024-07-15 21:46:56.524652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.737 [2024-07-15 21:46:56.524677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.996 [2024-07-15 21:46:56.529220] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.996 [2024-07-15 21:46:56.529312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.996 [2024-07-15 21:46:56.529338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.996 [2024-07-15 21:46:56.533602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.996 [2024-07-15 21:46:56.533675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.996 [2024-07-15 21:46:56.533701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.996 [2024-07-15 21:46:56.537999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.996 [2024-07-15 21:46:56.538072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.996 [2024-07-15 21:46:56.538098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.996 [2024-07-15 21:46:56.542266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.996 [2024-07-15 21:46:56.542335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.996 [2024-07-15 21:46:56.542361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.996 [2024-07-15 21:46:56.546511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.996 [2024-07-15 21:46:56.546580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.996 [2024-07-15 21:46:56.546605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.996 [2024-07-15 21:46:56.550776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.996 [2024-07-15 21:46:56.550879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.996 [2024-07-15 21:46:56.550904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.555114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.555247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 
[2024-07-15 21:46:56.555272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.559215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.559336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.559361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.563362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.563432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.563457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.567626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.567699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.567724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.571677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.571743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.571768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.575916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.575998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.576022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.580736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.580872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.580896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.585473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.585556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.585581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.590641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.590747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.590771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.596404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.596589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.596614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.601481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.601672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.601697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.606841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.607022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.607056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.611995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.612200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.612228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.617002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.617165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.617192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.622294] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.622440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.622466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.627475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.627669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.627695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.632575] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 
[2024-07-15 21:46:56.632748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.632773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.637645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.637824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.637849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.642707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.642902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.642928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.647788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.647985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.648009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.652892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.653065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.653090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.658655] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.658912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.658939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.664971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.665146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.665171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.670672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.670865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.670891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.676284] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.676406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.676431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.681282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.681348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.681373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.685385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.685460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.685486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.997 [2024-07-15 21:46:56.689203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.689276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.689300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:25:05.997 [2024-07-15 21:46:56.693001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.997 [2024-07-15 21:46:56.693067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.997 [2024-07-15 21:46:56.693092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.696840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.696917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.696941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.700684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.700764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.700789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.704486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.704589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.704613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.708368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.708433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.708458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.712176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.712253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.712277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.716006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.716082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.716107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.719796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.719874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.719897] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.723626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.723700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.723723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.727453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.727523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.727553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.731269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.731333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.731357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.735063] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.735133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.735163] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.738879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.738953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.738977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.742928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.743017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.743043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.746759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.746841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.746866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.750559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.750628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:05.998 [2024-07-15 21:46:56.750652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.754367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.754432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.754456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.758210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.758277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.758301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.762030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.762142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.762173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.765885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.765967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.765993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.770269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.770440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.770465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.775244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.775398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.775422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.780670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.780861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.780885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.998 [2024-07-15 21:46:56.786299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:05.998 [2024-07-15 21:46:56.786450] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.998 [2024-07-15 21:46:56.786477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.791353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.791539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.791566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.796441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.796586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.796613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.802399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.802584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.802611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.807433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.807548] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.807572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.812475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.812615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.812653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.817607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.817701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.817725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.822722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.822889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.822916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.828193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 
00:25:06.257 [2024-07-15 21:46:56.828360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.828395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.833520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.833697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.833723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.257 [2024-07-15 21:46:56.838600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xbbd060) with pdu=0x2000190fef90 00:25:06.257 [2024-07-15 21:46:56.838785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.257 [2024-07-15 21:46:56.838811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.257 00:25:06.257 Latency(us) 00:25:06.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.257 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:06.257 nvme0n1 : 2.00 6555.19 819.40 0.00 0.00 2434.15 1674.81 6699.24 00:25:06.257 =================================================================================================================== 00:25:06.257 Total : 6555.19 819.40 0.00 0.00 2434.15 1674.81 6699.24 00:25:06.257 0 00:25:06.257 21:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount 
nvme0n1 00:25:06.257 21:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:06.257 21:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:06.257 | .driver_specific 00:25:06.257 | .nvme_error 00:25:06.257 | .status_code 00:25:06.257 | .command_transient_transport_error' 00:25:06.257 21:46:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 423 > 0 )) 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 433098 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 433098 ']' 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 433098 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 433098 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 433098' 00:25:06.515 killing process with pid 433098 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 433098 00:25:06.515 Received shutdown signal, test time was about 2.000000 seconds 00:25:06.515 
00:25:06.515 Latency(us) 00:25:06.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.515 =================================================================================================================== 00:25:06.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.515 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 433098 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 431950 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 431950 ']' 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 431950 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 431950 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 431950' 00:25:06.774 killing process with pid 431950 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 431950 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 431950 00:25:06.774 00:25:06.774 real 0m15.255s 00:25:06.774 user 0m30.651s 00:25:06.774 sys 0m4.204s 00:25:06.774 21:46:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:06.774 21:46:57 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.774 ************************************ 00:25:06.774 END TEST nvmf_digest_error 00:25:06.774 ************************************ 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.033 rmmod nvme_tcp 00:25:07.033 rmmod nvme_fabrics 00:25:07.033 rmmod nvme_keyring 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 431950 ']' 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 431950 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 431950 ']' 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 431950 00:25:07.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (431950) - No such process 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 431950 is not found' 00:25:07.033 Process 
with pid 431950 is not found 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.033 21:46:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.941 21:46:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:08.941 00:25:08.941 real 0m34.768s 00:25:08.941 user 1m2.091s 00:25:08.941 sys 0m9.574s 00:25:08.941 21:46:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:08.941 21:46:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:08.941 ************************************ 00:25:08.941 END TEST nvmf_digest 00:25:08.941 ************************************ 00:25:08.941 21:46:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:08.941 21:46:59 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:08.941 21:46:59 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:25:08.941 21:46:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:25:08.941 21:46:59 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:08.941 21:46:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:08.941 21:46:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.941 21:46:59 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.199 ************************************ 00:25:09.199 START TEST nvmf_bdevperf 00:25:09.199 ************************************ 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:09.199 * Looking for test storage... 00:25:09.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:25:09.199 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 
00:25:09.200 21:46:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:11.104 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.104 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:11.105 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:11.105 Found net devices under 0000:08:00.0: cvl_0_0 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:11.105 Found net devices under 0000:08:00.1: cvl_0_1 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:11.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:11.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:25:11.105 00:25:11.105 --- 10.0.0.2 ping statistics --- 00:25:11.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.105 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:11.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:25:11.105 00:25:11.105 --- 10.0.0.1 ping statistics --- 00:25:11.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.105 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=434925 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 434925 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 434925 ']' 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.105 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:11.105 [2024-07-15 21:47:01.605876] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:25:11.105 [2024-07-15 21:47:01.605979] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.105 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.105 [2024-07-15 21:47:01.671954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:11.105 [2024-07-15 21:47:01.792054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
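The namespace plumbing that nvmf_tcp_init performed above (nvmf/common.sh@244-268) can be replayed as a standalone helper. A sketch only: the interface names cvl_0_0/cvl_0_1, the 10.0.0.1/10.0.0.2 addresses, and port 4420 are taken from this run, while the `run`/`DRY_RUN` wrapper and the function name are illustrative additions (not part of common.sh) so the sequence can be printed and inspected without root or real NICs.

```shell
# Echo commands instead of executing them when DRY_RUN=1.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_nvmf_netns() {
    local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"          # move the target port into the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                         # verify both directions, as the log does
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}

DRY_RUN=1
setup_nvmf_netns cvl_0_0 cvl_0_1
```

Isolating the target port in its own namespace is what lets a single machine act as both NVMe/TCP target and initiator over a real NIC pair, which is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`.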
00:25:11.105 [2024-07-15 21:47:01.792113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.105 [2024-07-15 21:47:01.792129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.105 [2024-07-15 21:47:01.792152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.105 [2024-07-15 21:47:01.792165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.105 [2024-07-15 21:47:01.792574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.105 [2024-07-15 21:47:01.792655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.105 [2024-07-15 21:47:01.792660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:11.363 [2024-07-15 21:47:01.929398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:11.363 Malloc0 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.363 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:11.364 [2024-07-15 21:47:01.991433] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:11.364 { 00:25:11.364 "params": { 00:25:11.364 "name": "Nvme$subsystem", 00:25:11.364 "trtype": "$TEST_TRANSPORT", 00:25:11.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.364 "adrfam": "ipv4", 00:25:11.364 "trsvcid": "$NVMF_PORT", 00:25:11.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.364 "hdgst": ${hdgst:-false}, 00:25:11.364 "ddgst": ${ddgst:-false} 00:25:11.364 }, 00:25:11.364 "method": "bdev_nvme_attach_controller" 00:25:11.364 } 00:25:11.364 EOF 00:25:11.364 )") 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:11.364 21:47:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
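The rpc_cmd bring-up above (host/bdevperf.sh@17-21: TCP transport, a Malloc0 bdev, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420) corresponds to this plain rpc.py sequence. A sketch under stated assumptions: `RPC` is set to echo here so the sequence prints instead of requiring a live nvmf_tgt; drop the echo and point it at `scripts/rpc.py` from an SPDK checkout to run it for real. All arguments mirror the log.

```shell
# Dry-run: prefixing with echo prints each rpc.py invocation.
RPC="echo rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up, bdevperf only needs the attach-controller JSON that gen_nvmf_target_json emits in the log to connect as an initiator.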
00:25:11.364 21:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:11.364 21:47:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:11.364 "params": { 00:25:11.364 "name": "Nvme1", 00:25:11.364 "trtype": "tcp", 00:25:11.364 "traddr": "10.0.0.2", 00:25:11.364 "adrfam": "ipv4", 00:25:11.364 "trsvcid": "4420", 00:25:11.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.364 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.364 "hdgst": false, 00:25:11.364 "ddgst": false 00:25:11.364 }, 00:25:11.364 "method": "bdev_nvme_attach_controller" 00:25:11.364 }' 00:25:11.364 [2024-07-15 21:47:02.043354] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:25:11.364 [2024-07-15 21:47:02.043449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434950 ] 00:25:11.364 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.364 [2024-07-15 21:47:02.104810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.621 [2024-07-15 21:47:02.222799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.621 Running I/O for 1 seconds... 
00:25:12.989 00:25:12.989 Latency(us) 00:25:12.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.989 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:12.989 Verification LBA range: start 0x0 length 0x4000 00:25:12.989 Nvme1n1 : 1.01 8858.87 34.60 0.00 0.00 14385.02 2864.17 14660.65 00:25:12.989 =================================================================================================================== 00:25:12.989 Total : 8858.87 34.60 0.00 0.00 14385.02 2864.17 14660.65 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=435149 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:12.989 { 00:25:12.989 "params": { 00:25:12.989 "name": "Nvme$subsystem", 00:25:12.989 "trtype": "$TEST_TRANSPORT", 00:25:12.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:12.989 "adrfam": "ipv4", 00:25:12.989 "trsvcid": "$NVMF_PORT", 00:25:12.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:12.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:12.989 "hdgst": ${hdgst:-false}, 00:25:12.989 "ddgst": ${ddgst:-false} 00:25:12.989 }, 00:25:12.989 "method": "bdev_nvme_attach_controller" 00:25:12.989 } 00:25:12.989 EOF 00:25:12.989 )") 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@554 -- # cat 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:12.989 21:47:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:12.989 "params": { 00:25:12.989 "name": "Nvme1", 00:25:12.989 "trtype": "tcp", 00:25:12.989 "traddr": "10.0.0.2", 00:25:12.989 "adrfam": "ipv4", 00:25:12.989 "trsvcid": "4420", 00:25:12.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:12.989 "hdgst": false, 00:25:12.989 "ddgst": false 00:25:12.989 }, 00:25:12.989 "method": "bdev_nvme_attach_controller" 00:25:12.989 }' 00:25:12.989 [2024-07-15 21:47:03.640820] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:25:12.989 [2024-07-15 21:47:03.640917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435149 ] 00:25:12.989 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.989 [2024-07-15 21:47:03.697642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.246 [2024-07-15 21:47:03.803479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.246 Running I/O for 15 seconds... 
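What follows in the log is the disconnect half of the test: a 15 s verify workload started with `-f` (keep running on failure), after which the target process is SIGKILLed mid-I/O. A dry-run sketch of that sequence, assuming the pid 434925 from this run; the `run` echo wrapper and the function name are illustrative, not part of bdevperf.sh:

```shell
# Echo the steps rather than execute them, so no SPDK build is needed.
run() { echo "+ $*"; }

bdevperf_disconnect_test() {
    local tgt_pid=$1
    # Long-running verify job; -f keeps bdevperf alive through I/O failures.
    run build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
    run sleep 3
    # SIGKILL the target: its TCP qpairs vanish, so every in-flight command
    # completes on the initiator as "ABORTED - SQ DELETION" (00/08).
    run kill -9 "$tgt_pid"
    run sleep 3
}

bdevperf_disconnect_test 434925
```

This is why the log below degenerates into a flood of nvme_qpair completion notices: the queue depth of 128 means up to 128 commands were outstanding when the submission queues were torn down.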
00:25:16.527 21:47:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 434925 00:25:16.527 21:47:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:16.527 [2024-07-15 21:47:06.604857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.604900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.604931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.604947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.604965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.604979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.604995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.527 [2024-07-15 21:47:06.605426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 
[2024-07-15 21:47:06.605454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605612] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:53504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.527 [2024-07-15 21:47:06.605916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.527 [2024-07-15 21:47:06.605929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.605944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.528 [2024-07-15 21:47:06.605956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.605971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.528 [2024-07-15 21:47:06.605984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.605999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.528 [2024-07-15 21:47:06.606012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.528 [2024-07-15 21:47:06.606039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.528 [2024-07-15 21:47:06.606067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.528 [2024-07-15 21:47:06.606094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 
[2024-07-15 21:47:06.606285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 
[2024-07-15 21:47:06.606754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.606974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.606990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.607005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.607018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.607033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.607045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.607060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.607073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.607087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.607100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.607114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.528 [2024-07-15 21:47:06.607145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.528 [2024-07-15 21:47:06.607162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 
[2024-07-15 21:47:06.607258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:16.529 [2024-07-15 21:47:06.607723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 
[2024-07-15 21:47:06.607738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.607981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.607995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 
[2024-07-15 21:47:06.608249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.529 [2024-07-15 21:47:06.608396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.529 [2024-07-15 21:47:06.608409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.530 [2024-07-15 21:47:06.608436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.530 [2024-07-15 21:47:06.608467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.530 [2024-07-15 21:47:06.608495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.530 [2024-07-15 21:47:06.608522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.530 [2024-07-15 21:47:06.608550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.530 [2024-07-15 21:47:06.608577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.530 [2024-07-15 21:47:06.608605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.530 [2024-07-15 21:47:06.608632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf94770 is same with the state(5) to be set 00:25:16.530 [2024-07-15 21:47:06.608661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:16.530 [2024-07-15 21:47:06.608673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:16.530 [2024-07-15 21:47:06.608684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53448 len:8 PRP1 0x0 PRP2 0x0 00:25:16.530 [2024-07-15 21:47:06.608697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:16.530 [2024-07-15 21:47:06.608749] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf94770 was disconnected and freed. reset controller. 
00:25:16.530 [2024-07-15 21:47:06.612452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.530 [2024-07-15 21:47:06.612515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.530 [2024-07-15 21:47:06.613153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.530 [2024-07-15 21:47:06.613232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.530 [2024-07-15 21:47:06.613248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.530 [2024-07-15 21:47:06.613481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.530 [2024-07-15 21:47:06.613706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.530 [2024-07-15 21:47:06.613725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.530 [2024-07-15 21:47:06.613741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.530 [2024-07-15 21:47:06.617168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.530 [2024-07-15 21:47:06.626414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.530 [2024-07-15 21:47:06.626877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.530 [2024-07-15 21:47:06.626913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.530 [2024-07-15 21:47:06.626930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.530 [2024-07-15 21:47:06.627167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.530 [2024-07-15 21:47:06.627393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.530 [2024-07-15 21:47:06.627411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.530 [2024-07-15 21:47:06.627424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.530 [2024-07-15 21:47:06.630803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.530 [2024-07-15 21:47:06.640186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.530 [2024-07-15 21:47:06.640680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.530 [2024-07-15 21:47:06.640739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.530 [2024-07-15 21:47:06.640755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.530 [2024-07-15 21:47:06.641002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.530 [2024-07-15 21:47:06.641260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.530 [2024-07-15 21:47:06.641278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.530 [2024-07-15 21:47:06.641303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.530 [2024-07-15 21:47:06.644592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.530 [2024-07-15 21:47:06.653813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.530 [2024-07-15 21:47:06.654227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.530 [2024-07-15 21:47:06.654261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.530 [2024-07-15 21:47:06.654276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.530 [2024-07-15 21:47:06.654523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.530 [2024-07-15 21:47:06.654765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.530 [2024-07-15 21:47:06.654782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.530 [2024-07-15 21:47:06.654807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.530 [2024-07-15 21:47:06.658037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.530 [2024-07-15 21:47:06.667176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.530 [2024-07-15 21:47:06.667608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.530 [2024-07-15 21:47:06.667633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.530 [2024-07-15 21:47:06.667664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.530 [2024-07-15 21:47:06.667885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.530 [2024-07-15 21:47:06.668128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.530 [2024-07-15 21:47:06.668165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.530 [2024-07-15 21:47:06.668178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.530 [2024-07-15 21:47:06.671332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.530 [2024-07-15 21:47:06.680519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.530 [2024-07-15 21:47:06.680938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.530 [2024-07-15 21:47:06.680973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.530 [2024-07-15 21:47:06.680990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.530 [2024-07-15 21:47:06.681248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.530 [2024-07-15 21:47:06.681482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.530 [2024-07-15 21:47:06.681499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.530 [2024-07-15 21:47:06.681523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.530 [2024-07-15 21:47:06.684792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.530 [2024-07-15 21:47:06.693968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.530 [2024-07-15 21:47:06.694458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.530 [2024-07-15 21:47:06.694490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.530 [2024-07-15 21:47:06.694506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.530 [2024-07-15 21:47:06.694742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.530 [2024-07-15 21:47:06.694974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.530 [2024-07-15 21:47:06.694991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.530 [2024-07-15 21:47:06.695015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.530 [2024-07-15 21:47:06.698158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.530 [2024-07-15 21:47:06.707355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.530 [2024-07-15 21:47:06.707831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.530 [2024-07-15 21:47:06.707882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.530 [2024-07-15 21:47:06.707898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.530 [2024-07-15 21:47:06.708134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.530 [2024-07-15 21:47:06.708375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.530 [2024-07-15 21:47:06.708396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.530 [2024-07-15 21:47:06.708421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.530 [2024-07-15 21:47:06.711621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.720796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.721207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.721255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.721271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.721495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.721726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.531 [2024-07-15 21:47:06.721743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.531 [2024-07-15 21:47:06.721766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.531 [2024-07-15 21:47:06.725050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.734321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.734855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.734898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.734914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.735148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.735382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.531 [2024-07-15 21:47:06.735399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.531 [2024-07-15 21:47:06.735423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.531 [2024-07-15 21:47:06.738622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.747847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.748289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.748320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.748347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.748571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.748794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.531 [2024-07-15 21:47:06.748812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.531 [2024-07-15 21:47:06.748824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.531 [2024-07-15 21:47:06.752010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.761451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.761861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.761910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.761925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.762177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.762422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.531 [2024-07-15 21:47:06.762440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.531 [2024-07-15 21:47:06.762464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.531 [2024-07-15 21:47:06.765655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.774972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.775316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.775359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.775374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.775609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.775841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.531 [2024-07-15 21:47:06.775858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.531 [2024-07-15 21:47:06.775882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.531 [2024-07-15 21:47:06.779022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.788466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.788900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.788947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.788962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.789230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.789464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.531 [2024-07-15 21:47:06.789481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.531 [2024-07-15 21:47:06.789505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.531 [2024-07-15 21:47:06.792615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.801835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.802199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.802232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.802258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.802489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.802722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.531 [2024-07-15 21:47:06.802739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.531 [2024-07-15 21:47:06.802763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.531 [2024-07-15 21:47:06.805902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.815336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.815838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.815869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.815884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.816119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.816362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.531 [2024-07-15 21:47:06.816380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.531 [2024-07-15 21:47:06.816403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.531 [2024-07-15 21:47:06.819541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.531 [2024-07-15 21:47:06.828773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.531 [2024-07-15 21:47:06.829217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.531 [2024-07-15 21:47:06.829263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.531 [2024-07-15 21:47:06.829279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.531 [2024-07-15 21:47:06.829502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.531 [2024-07-15 21:47:06.829737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.829754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.829777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.832914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.842395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.842730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.842766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.842780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.843010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.843264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.843282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.843310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.846507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.855942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.856348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.856404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.856418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.856650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.856873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.856891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.856904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.860641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.869823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.870161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.870188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.870215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.870436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.870659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.870677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.870690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.874095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.883461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.883816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.883872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.883887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.884113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.884351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.884370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.884383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.887762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.897128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.897507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.897562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.897579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.897806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.898030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.898048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.898061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.901452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.910590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.911175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.911207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.911233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.911459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.911690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.911707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.911731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.914875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.924057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.924411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.924454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.924470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.924705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.924936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.924954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.924977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.928120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.937473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.937909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.937956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.937970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.938233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.938464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.938481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.938505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.941611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.950953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.951374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.951406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.951421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.951656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.951893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.951910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.951933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.955074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.964281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.964730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.964762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.964777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.965013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.965282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.532 [2024-07-15 21:47:06.965300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.532 [2024-07-15 21:47:06.965312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.532 [2024-07-15 21:47:06.968428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.532 [2024-07-15 21:47:06.977776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.532 [2024-07-15 21:47:06.978327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.532 [2024-07-15 21:47:06.978359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.532 [2024-07-15 21:47:06.978374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.532 [2024-07-15 21:47:06.978609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.532 [2024-07-15 21:47:06.978841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.533 [2024-07-15 21:47:06.978858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.533 [2024-07-15 21:47:06.978881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.533 [2024-07-15 21:47:06.982048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.533 [2024-07-15 21:47:06.991216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:06.991714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:06.991745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:06.991760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:06.991968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:06.992196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:06.992215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:06.992238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:06.995337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.004669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.005007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.005051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.005066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.005338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.005571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.005588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.005612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.008721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.018075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.018552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.018600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.018615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.018839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.019071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.019088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.019112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.022260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.031431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.031859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.031895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.031922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.032167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.032390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.032408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.032431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.035544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.044884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.045322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.045377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.045390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.045621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.045851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.045868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.045892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.049020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.058361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.058676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.058762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.058776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.058995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.059239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.059256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.059269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.062364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.071683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.072178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.072210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.072226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.072433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.072642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.072658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.072670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.075792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.085143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.085607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.085639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.085654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.085862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.086067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.086083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.086095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.089239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.098582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.098903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.098927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.098941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.099181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.099402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.099419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.099442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.102547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.112083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.112654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.112708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.533 [2024-07-15 21:47:07.112724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.533 [2024-07-15 21:47:07.112962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.533 [2024-07-15 21:47:07.113236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.533 [2024-07-15 21:47:07.113254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.533 [2024-07-15 21:47:07.113279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.533 [2024-07-15 21:47:07.116831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.533 [2024-07-15 21:47:07.125640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.533 [2024-07-15 21:47:07.126045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.533 [2024-07-15 21:47:07.126103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.126118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.126350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.126561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.126578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.126591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.129861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.139281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.139600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.139693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.139708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.139926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.140177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.140195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.140207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.143314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.152605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.153008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.153061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.153074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.153327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.153532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.153548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.153560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.156672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.166027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.166468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.166516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.166536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.166744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.166949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.166966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.166978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.170104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.179493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.179936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.179987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.180002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.180263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.180495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.180512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.180536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.183657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.192938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.193431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.193463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.193478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.193714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.193945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.193962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.193985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.197201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.206423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.206856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.206906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.206920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.207170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.207392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.207409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.207437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.210641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.219895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.220307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.220355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.220370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.220594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.220825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.220842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.220866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.224000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.233361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.233691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.233716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.233741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.233973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.234227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.234256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.234269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.237390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.246732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.247066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.247152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.247167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.247370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.247573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.247590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.247601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.250690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.260166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.260474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.260510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.534 [2024-07-15 21:47:07.260524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.534 [2024-07-15 21:47:07.260727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.534 [2024-07-15 21:47:07.260937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.534 [2024-07-15 21:47:07.260956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.534 [2024-07-15 21:47:07.260967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.534 [2024-07-15 21:47:07.264085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.534 [2024-07-15 21:47:07.273644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.534 [2024-07-15 21:47:07.274060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.534 [2024-07-15 21:47:07.274112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.535 [2024-07-15 21:47:07.274126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.535 [2024-07-15 21:47:07.274377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.535 [2024-07-15 21:47:07.274581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.535 [2024-07-15 21:47:07.274598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.535 [2024-07-15 21:47:07.274610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.535 [2024-07-15 21:47:07.277726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.535 [2024-07-15 21:47:07.287098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.535 [2024-07-15 21:47:07.287494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.535 [2024-07-15 21:47:07.287527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.535 [2024-07-15 21:47:07.287541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.535 [2024-07-15 21:47:07.287743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.535 [2024-07-15 21:47:07.287947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.535 [2024-07-15 21:47:07.287963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.535 [2024-07-15 21:47:07.287975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.535 [2024-07-15 21:47:07.291068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.535 [2024-07-15 21:47:07.300417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.535 [2024-07-15 21:47:07.300724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.535 [2024-07-15 21:47:07.300803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.535 [2024-07-15 21:47:07.300817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.535 [2024-07-15 21:47:07.301039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.535 [2024-07-15 21:47:07.301295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.535 [2024-07-15 21:47:07.301312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.535 [2024-07-15 21:47:07.301336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.535 [2024-07-15 21:47:07.304442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.794 [2024-07-15 21:47:07.314268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.794 [2024-07-15 21:47:07.314658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.794 [2024-07-15 21:47:07.314712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.794 [2024-07-15 21:47:07.314728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.794 [2024-07-15 21:47:07.314957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.794 [2024-07-15 21:47:07.315193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.794 [2024-07-15 21:47:07.315212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.794 [2024-07-15 21:47:07.315226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.794 [2024-07-15 21:47:07.318608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.794 [2024-07-15 21:47:07.327983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.794 [2024-07-15 21:47:07.328422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.794 [2024-07-15 21:47:07.328476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.794 [2024-07-15 21:47:07.328490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.794 [2024-07-15 21:47:07.328732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.794 [2024-07-15 21:47:07.328973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.794 [2024-07-15 21:47:07.328990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.794 [2024-07-15 21:47:07.329015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.794 [2024-07-15 21:47:07.332343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.794 [2024-07-15 21:47:07.341480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.794 [2024-07-15 21:47:07.341883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.794 [2024-07-15 21:47:07.341934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.794 [2024-07-15 21:47:07.341948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.794 [2024-07-15 21:47:07.342187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.794 [2024-07-15 21:47:07.342423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.794 [2024-07-15 21:47:07.342440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.794 [2024-07-15 21:47:07.342468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.794 [2024-07-15 21:47:07.345571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.794 [2024-07-15 21:47:07.355107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.794 [2024-07-15 21:47:07.355580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.794 [2024-07-15 21:47:07.355622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.794 [2024-07-15 21:47:07.355636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.794 [2024-07-15 21:47:07.355867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.794 [2024-07-15 21:47:07.356120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.794 [2024-07-15 21:47:07.356160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.794 [2024-07-15 21:47:07.356174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.794 [2024-07-15 21:47:07.359431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.794 [2024-07-15 21:47:07.368633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:16.794 [2024-07-15 21:47:07.369032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:16.794 [2024-07-15 21:47:07.369078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:16.794 [2024-07-15 21:47:07.369094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:16.794 [2024-07-15 21:47:07.369359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:16.794 [2024-07-15 21:47:07.369612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:16.794 [2024-07-15 21:47:07.369630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:16.794 [2024-07-15 21:47:07.369643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:16.794 [2024-07-15 21:47:07.373264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:16.794 [2024-07-15 21:47:07.382453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.794 [2024-07-15 21:47:07.382896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.794 [2024-07-15 21:47:07.382950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.794 [2024-07-15 21:47:07.382967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.794 [2024-07-15 21:47:07.383204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.794 [2024-07-15 21:47:07.383429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.794 [2024-07-15 21:47:07.383448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.794 [2024-07-15 21:47:07.383461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.794 [2024-07-15 21:47:07.386838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.794 [2024-07-15 21:47:07.396218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.794 [2024-07-15 21:47:07.396619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.794 [2024-07-15 21:47:07.396676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.794 [2024-07-15 21:47:07.396693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.794 [2024-07-15 21:47:07.396920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.794 [2024-07-15 21:47:07.397155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.794 [2024-07-15 21:47:07.397174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.794 [2024-07-15 21:47:07.397187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.794 [2024-07-15 21:47:07.400562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.794 [2024-07-15 21:47:07.409798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.794 [2024-07-15 21:47:07.410176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.794 [2024-07-15 21:47:07.410245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.794 [2024-07-15 21:47:07.410260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.794 [2024-07-15 21:47:07.410491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.794 [2024-07-15 21:47:07.410731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.794 [2024-07-15 21:47:07.410749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.794 [2024-07-15 21:47:07.410774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.413919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.423231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.423667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.423708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.423725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.423948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.424215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.424233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.424245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.427334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.436622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.437072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.437103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.437118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.437359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.437569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.437587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.437599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.440689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.449978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.450488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.450520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.450536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.450759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.450991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.451008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.451032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.454158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.463479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.463915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.463966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.463979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.464218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.464449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.464466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.464489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.467619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.476954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.477410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.477460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.477474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.477705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.477936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.477952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.477975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.481082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.490463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.490916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.490988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.491003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.491249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.491455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.491472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.491484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.494577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.503864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.504281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.504332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.504347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.504554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.504759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.504776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.504788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.507881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.517364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.517807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.517857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.517872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.518080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.518323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.518341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.518354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.521442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.530731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.531179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.531227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.531247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.531483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.531714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.531731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.531754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.534865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.544187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.544689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.544736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.544751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.544959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.795 [2024-07-15 21:47:07.545175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.795 [2024-07-15 21:47:07.545192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.795 [2024-07-15 21:47:07.545204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.795 [2024-07-15 21:47:07.548291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.795 [2024-07-15 21:47:07.557584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.795 [2024-07-15 21:47:07.558173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.795 [2024-07-15 21:47:07.558206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.795 [2024-07-15 21:47:07.558232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.795 [2024-07-15 21:47:07.558440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.796 [2024-07-15 21:47:07.558644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.796 [2024-07-15 21:47:07.558661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.796 [2024-07-15 21:47:07.558673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.796 [2024-07-15 21:47:07.561773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.796 [2024-07-15 21:47:07.571065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.796 [2024-07-15 21:47:07.571598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.796 [2024-07-15 21:47:07.571641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.796 [2024-07-15 21:47:07.571656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.796 [2024-07-15 21:47:07.571864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.796 [2024-07-15 21:47:07.572068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.796 [2024-07-15 21:47:07.572089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.796 [2024-07-15 21:47:07.572102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.796 [2024-07-15 21:47:07.575226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:16.796 [2024-07-15 21:47:07.584894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.796 [2024-07-15 21:47:07.585454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.796 [2024-07-15 21:47:07.585486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:16.796 [2024-07-15 21:47:07.585501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:16.796 [2024-07-15 21:47:07.585749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:16.796 [2024-07-15 21:47:07.585995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.796 [2024-07-15 21:47:07.586014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.796 [2024-07-15 21:47:07.586027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.056 [2024-07-15 21:47:07.589416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.056 [2024-07-15 21:47:07.598587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.056 [2024-07-15 21:47:07.598983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.056 [2024-07-15 21:47:07.599035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.056 [2024-07-15 21:47:07.599052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.056 [2024-07-15 21:47:07.599294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.056 [2024-07-15 21:47:07.599519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.056 [2024-07-15 21:47:07.599537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.056 [2024-07-15 21:47:07.599550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.056 [2024-07-15 21:47:07.602928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.056 [2024-07-15 21:47:07.612243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.056 [2024-07-15 21:47:07.612749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.056 [2024-07-15 21:47:07.612784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.056 [2024-07-15 21:47:07.612800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.056 [2024-07-15 21:47:07.613027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.056 [2024-07-15 21:47:07.613270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.056 [2024-07-15 21:47:07.613288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.056 [2024-07-15 21:47:07.613313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.056 [2024-07-15 21:47:07.616609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.056 [2024-07-15 21:47:07.625615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.056 [2024-07-15 21:47:07.626084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.056 [2024-07-15 21:47:07.626119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.056 [2024-07-15 21:47:07.626136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.056 [2024-07-15 21:47:07.626398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.056 [2024-07-15 21:47:07.626646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.056 [2024-07-15 21:47:07.626676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.056 [2024-07-15 21:47:07.626688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.056 [2024-07-15 21:47:07.630391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.056 [2024-07-15 21:47:07.639403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.056 [2024-07-15 21:47:07.639882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.056 [2024-07-15 21:47:07.639934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.056 [2024-07-15 21:47:07.639950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.056 [2024-07-15 21:47:07.640188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.056 [2024-07-15 21:47:07.640413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.056 [2024-07-15 21:47:07.640431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.056 [2024-07-15 21:47:07.640445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.056 [2024-07-15 21:47:07.643836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.056 [2024-07-15 21:47:07.653072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.056 [2024-07-15 21:47:07.653610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.056 [2024-07-15 21:47:07.653643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.056 [2024-07-15 21:47:07.653658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.056 [2024-07-15 21:47:07.653894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.056 [2024-07-15 21:47:07.654126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.056 [2024-07-15 21:47:07.654152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.056 [2024-07-15 21:47:07.654177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.056 [2024-07-15 21:47:07.657287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.056 [2024-07-15 21:47:07.666432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.056 [2024-07-15 21:47:07.666979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.056 [2024-07-15 21:47:07.667026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.056 [2024-07-15 21:47:07.667041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.056 [2024-07-15 21:47:07.667290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.056 [2024-07-15 21:47:07.667496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.056 [2024-07-15 21:47:07.667512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.056 [2024-07-15 21:47:07.667524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.056 [2024-07-15 21:47:07.670615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.056 [2024-07-15 21:47:07.679900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.056 [2024-07-15 21:47:07.680412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.056 [2024-07-15 21:47:07.680457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.056 [2024-07-15 21:47:07.680472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.056 [2024-07-15 21:47:07.680679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.056 [2024-07-15 21:47:07.680884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.056 [2024-07-15 21:47:07.680901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.680913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.684057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.693285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.693689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.057 [2024-07-15 21:47:07.693720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.057 [2024-07-15 21:47:07.693735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.057 [2024-07-15 21:47:07.693971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.057 [2024-07-15 21:47:07.694216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.057 [2024-07-15 21:47:07.694235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.694259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.697366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.706683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.707020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.057 [2024-07-15 21:47:07.707107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.057 [2024-07-15 21:47:07.707122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.057 [2024-07-15 21:47:07.707362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.057 [2024-07-15 21:47:07.707567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.057 [2024-07-15 21:47:07.707583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.707599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.710684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.720152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.720652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.057 [2024-07-15 21:47:07.720698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.057 [2024-07-15 21:47:07.720713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.057 [2024-07-15 21:47:07.720920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.057 [2024-07-15 21:47:07.721125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.057 [2024-07-15 21:47:07.721151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.721176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.724279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.733565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.733977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.057 [2024-07-15 21:47:07.734029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.057 [2024-07-15 21:47:07.734043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.057 [2024-07-15 21:47:07.734279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.057 [2024-07-15 21:47:07.734484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.057 [2024-07-15 21:47:07.734501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.734513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.737679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.747142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.747527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.057 [2024-07-15 21:47:07.747609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.057 [2024-07-15 21:47:07.747623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.057 [2024-07-15 21:47:07.747825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.057 [2024-07-15 21:47:07.748029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.057 [2024-07-15 21:47:07.748046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.748058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.751377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.760493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.760993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.057 [2024-07-15 21:47:07.761049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.057 [2024-07-15 21:47:07.761064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.057 [2024-07-15 21:47:07.761336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.057 [2024-07-15 21:47:07.761550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.057 [2024-07-15 21:47:07.761569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.761582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.764722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.773856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.774307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.057 [2024-07-15 21:47:07.774339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.057 [2024-07-15 21:47:07.774355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.057 [2024-07-15 21:47:07.774579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.057 [2024-07-15 21:47:07.774810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.057 [2024-07-15 21:47:07.774827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.774851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.777968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.787290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.787624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.057 [2024-07-15 21:47:07.787714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.057 [2024-07-15 21:47:07.787728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.057 [2024-07-15 21:47:07.787946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.057 [2024-07-15 21:47:07.788198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.057 [2024-07-15 21:47:07.788234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.057 [2024-07-15 21:47:07.788246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.057 [2024-07-15 21:47:07.791364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.057 [2024-07-15 21:47:07.800838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.057 [2024-07-15 21:47:07.801310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.058 [2024-07-15 21:47:07.801356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.058 [2024-07-15 21:47:07.801372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.058 [2024-07-15 21:47:07.801631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.058 [2024-07-15 21:47:07.801868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.058 [2024-07-15 21:47:07.801885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.058 [2024-07-15 21:47:07.801910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.058 [2024-07-15 21:47:07.805108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.058 [2024-07-15 21:47:07.814634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.058 [2024-07-15 21:47:07.815070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.058 [2024-07-15 21:47:07.815121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.058 [2024-07-15 21:47:07.815156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.058 [2024-07-15 21:47:07.815398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.058 [2024-07-15 21:47:07.815658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.058 [2024-07-15 21:47:07.815675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.058 [2024-07-15 21:47:07.815700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.058 [2024-07-15 21:47:07.818891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.058 [2024-07-15 21:47:07.828158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.058 [2024-07-15 21:47:07.828700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.058 [2024-07-15 21:47:07.828732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.058 [2024-07-15 21:47:07.828747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.058 [2024-07-15 21:47:07.828983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.058 [2024-07-15 21:47:07.829224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.058 [2024-07-15 21:47:07.829242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.058 [2024-07-15 21:47:07.829266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.058 [2024-07-15 21:47:07.832397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.058 [2024-07-15 21:47:07.841698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.058 [2024-07-15 21:47:07.842204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.058 [2024-07-15 21:47:07.842236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.058 [2024-07-15 21:47:07.842251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.058 [2024-07-15 21:47:07.842487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.058 [2024-07-15 21:47:07.842719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.058 [2024-07-15 21:47:07.842736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.058 [2024-07-15 21:47:07.842759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.058 [2024-07-15 21:47:07.846410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.319 [2024-07-15 21:47:07.855513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.319 [2024-07-15 21:47:07.855852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.319 [2024-07-15 21:47:07.855890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.319 [2024-07-15 21:47:07.855904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.319 [2024-07-15 21:47:07.856156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.319 [2024-07-15 21:47:07.856399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.319 [2024-07-15 21:47:07.856416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.319 [2024-07-15 21:47:07.856441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.319 [2024-07-15 21:47:07.859729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.319 [2024-07-15 21:47:07.869182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.319 [2024-07-15 21:47:07.869562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.319 [2024-07-15 21:47:07.869600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.319 [2024-07-15 21:47:07.869631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.319 [2024-07-15 21:47:07.869860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.319 [2024-07-15 21:47:07.870101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.319 [2024-07-15 21:47:07.870131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.319 [2024-07-15 21:47:07.870153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.319 [2024-07-15 21:47:07.873455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.319 [2024-07-15 21:47:07.882765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.319 [2024-07-15 21:47:07.883183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.319 [2024-07-15 21:47:07.883227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.319 [2024-07-15 21:47:07.883242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.319 [2024-07-15 21:47:07.883469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.319 [2024-07-15 21:47:07.883703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.319 [2024-07-15 21:47:07.883722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.319 [2024-07-15 21:47:07.883736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.319 [2024-07-15 21:47:07.887511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.319 [2024-07-15 21:47:07.896614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.319 [2024-07-15 21:47:07.896969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.319 [2024-07-15 21:47:07.896994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.319 [2024-07-15 21:47:07.897014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.319 [2024-07-15 21:47:07.897272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.319 [2024-07-15 21:47:07.897497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.319 [2024-07-15 21:47:07.897515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.319 [2024-07-15 21:47:07.897528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.319 [2024-07-15 21:47:07.901006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.319 [2024-07-15 21:47:07.910214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.319 [2024-07-15 21:47:07.910590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.319 [2024-07-15 21:47:07.910643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.319 [2024-07-15 21:47:07.910658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.319 [2024-07-15 21:47:07.910905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.319 [2024-07-15 21:47:07.911174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.319 [2024-07-15 21:47:07.911204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.319 [2024-07-15 21:47:07.911217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.319 [2024-07-15 21:47:07.914597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.319 [2024-07-15 21:47:07.923972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.319 [2024-07-15 21:47:07.924472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.319 [2024-07-15 21:47:07.924524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.320 [2024-07-15 21:47:07.924539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.320 [2024-07-15 21:47:07.924786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.320 [2024-07-15 21:47:07.925027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.320 [2024-07-15 21:47:07.925045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.320 [2024-07-15 21:47:07.925070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.320 [2024-07-15 21:47:07.928285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.320 [2024-07-15 21:47:07.937610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.320 [2024-07-15 21:47:07.938054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.320 [2024-07-15 21:47:07.938118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.320 [2024-07-15 21:47:07.938133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.320 [2024-07-15 21:47:07.938372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.320 [2024-07-15 21:47:07.938614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.320 [2024-07-15 21:47:07.938636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.320 [2024-07-15 21:47:07.938661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.320 [2024-07-15 21:47:07.941886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.320 [2024-07-15 21:47:07.951078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.320 [2024-07-15 21:47:07.951487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.320 [2024-07-15 21:47:07.951539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.320 [2024-07-15 21:47:07.951555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.320 [2024-07-15 21:47:07.951791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.320 [2024-07-15 21:47:07.952022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.320 [2024-07-15 21:47:07.952040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.320 [2024-07-15 21:47:07.952063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.320 [2024-07-15 21:47:07.955216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.320 [2024-07-15 21:47:07.964593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.320 [2024-07-15 21:47:07.964992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.320 [2024-07-15 21:47:07.965077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.320 [2024-07-15 21:47:07.965091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.320 [2024-07-15 21:47:07.965344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.320 [2024-07-15 21:47:07.965576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.320 [2024-07-15 21:47:07.965593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.320 [2024-07-15 21:47:07.965617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.320 [2024-07-15 21:47:07.968718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.320 [2024-07-15 21:47:07.978117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.320 [2024-07-15 21:47:07.978601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.320 [2024-07-15 21:47:07.978633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.320 [2024-07-15 21:47:07.978648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.320 [2024-07-15 21:47:07.978884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.320 [2024-07-15 21:47:07.979115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.320 [2024-07-15 21:47:07.979133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.320 [2024-07-15 21:47:07.979168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.320 [2024-07-15 21:47:07.982304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.320 [2024-07-15 21:47:07.991478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.320 [2024-07-15 21:47:07.991905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.320 [2024-07-15 21:47:07.991956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.320 [2024-07-15 21:47:07.991971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.320 [2024-07-15 21:47:07.992231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.320 [2024-07-15 21:47:07.992453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.320 [2024-07-15 21:47:07.992470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.320 [2024-07-15 21:47:07.992494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.320 [2024-07-15 21:47:07.995604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.320 [2024-07-15 21:47:08.004981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.320 [2024-07-15 21:47:08.005556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.320 [2024-07-15 21:47:08.005587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.320 [2024-07-15 21:47:08.005614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.320 [2024-07-15 21:47:08.005839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.320 [2024-07-15 21:47:08.006071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.320 [2024-07-15 21:47:08.006087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.320 [2024-07-15 21:47:08.006111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.320 [2024-07-15 21:47:08.009257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.320 [2024-07-15 21:47:08.018419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.320 [2024-07-15 21:47:08.018887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.320 [2024-07-15 21:47:08.018939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.320 [2024-07-15 21:47:08.018954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.320 [2024-07-15 21:47:08.019213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.320 [2024-07-15 21:47:08.019435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.320 [2024-07-15 21:47:08.019452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.320 [2024-07-15 21:47:08.019475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.320 [2024-07-15 21:47:08.022584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.320 [2024-07-15 21:47:08.031759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.320 [2024-07-15 21:47:08.032098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.320 [2024-07-15 21:47:08.032133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.320 [2024-07-15 21:47:08.032156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.320 [2024-07-15 21:47:08.032407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.320 [2024-07-15 21:47:08.032639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.320 [2024-07-15 21:47:08.032656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.320 [2024-07-15 21:47:08.032679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.320 [2024-07-15 21:47:08.035792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.320 [2024-07-15 21:47:08.045159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.320 [2024-07-15 21:47:08.045492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.320 [2024-07-15 21:47:08.045582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.320 [2024-07-15 21:47:08.045597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.320 [2024-07-15 21:47:08.045830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.320 [2024-07-15 21:47:08.046061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.320 [2024-07-15 21:47:08.046077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.320 [2024-07-15 21:47:08.046102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.320 [2024-07-15 21:47:08.049240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.320 [2024-07-15 21:47:08.058573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.320 [2024-07-15 21:47:08.058999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.321 [2024-07-15 21:47:08.059035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.321 [2024-07-15 21:47:08.059061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.321 [2024-07-15 21:47:08.059317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.321 [2024-07-15 21:47:08.059548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.321 [2024-07-15 21:47:08.059564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.321 [2024-07-15 21:47:08.059588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.321 [2024-07-15 21:47:08.062700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.321 [2024-07-15 21:47:08.071927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.321 [2024-07-15 21:47:08.072371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.321 [2024-07-15 21:47:08.072405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.321 [2024-07-15 21:47:08.072431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.321 [2024-07-15 21:47:08.072650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.321 [2024-07-15 21:47:08.072880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.321 [2024-07-15 21:47:08.072897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.321 [2024-07-15 21:47:08.072925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.321 [2024-07-15 21:47:08.076083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.321 [2024-07-15 21:47:08.085310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.321 [2024-07-15 21:47:08.085798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.321 [2024-07-15 21:47:08.085831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.321 [2024-07-15 21:47:08.085847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.321 [2024-07-15 21:47:08.086070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.321 [2024-07-15 21:47:08.086329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.321 [2024-07-15 21:47:08.086347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.321 [2024-07-15 21:47:08.086371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.321 [2024-07-15 21:47:08.089475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.321 [2024-07-15 21:47:08.098678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.321 [2024-07-15 21:47:08.099176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.321 [2024-07-15 21:47:08.099207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.321 [2024-07-15 21:47:08.099233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.321 [2024-07-15 21:47:08.099458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.321 [2024-07-15 21:47:08.099689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.321 [2024-07-15 21:47:08.099707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.321 [2024-07-15 21:47:08.099730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.321 [2024-07-15 21:47:08.102865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.615 [2024-07-15 21:47:08.112344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.615 [2024-07-15 21:47:08.112739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.615 [2024-07-15 21:47:08.112791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.615 [2024-07-15 21:47:08.112807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.615 [2024-07-15 21:47:08.113034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.615 [2024-07-15 21:47:08.113269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.615 [2024-07-15 21:47:08.113288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.615 [2024-07-15 21:47:08.113302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.615 [2024-07-15 21:47:08.116682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.615 [2024-07-15 21:47:08.126063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.615 [2024-07-15 21:47:08.126441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.615 [2024-07-15 21:47:08.126503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.615 [2024-07-15 21:47:08.126520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.615 [2024-07-15 21:47:08.126742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.615 [2024-07-15 21:47:08.126967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.615 [2024-07-15 21:47:08.126984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.615 [2024-07-15 21:47:08.126997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.615 [2024-07-15 21:47:08.130379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.615 [2024-07-15 21:47:08.139753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.615 [2024-07-15 21:47:08.140127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.615 [2024-07-15 21:47:08.140191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.615 [2024-07-15 21:47:08.140206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.615 [2024-07-15 21:47:08.140427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.615 [2024-07-15 21:47:08.140649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.615 [2024-07-15 21:47:08.140667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.615 [2024-07-15 21:47:08.140680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.615 [2024-07-15 21:47:08.144217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.615 [2024-07-15 21:47:08.153426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.615 [2024-07-15 21:47:08.153724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.615 [2024-07-15 21:47:08.153750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.615 [2024-07-15 21:47:08.153765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.615 [2024-07-15 21:47:08.153986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.615 [2024-07-15 21:47:08.154218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.615 [2024-07-15 21:47:08.154236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.615 [2024-07-15 21:47:08.154249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.615 [2024-07-15 21:47:08.157638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.615 [2024-07-15 21:47:08.167146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.615 [2024-07-15 21:47:08.167478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.615 [2024-07-15 21:47:08.167570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.615 [2024-07-15 21:47:08.167585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.615 [2024-07-15 21:47:08.167817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.615 [2024-07-15 21:47:08.168054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.615 [2024-07-15 21:47:08.168071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.615 [2024-07-15 21:47:08.168095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.615 [2024-07-15 21:47:08.171311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.180622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.181054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.181106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.181120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.181358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.181590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.181606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.181630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.184731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.194031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.194464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.194514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.194527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.194757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.194979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.194996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.195009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.198164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.207479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.207988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.208035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.208050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.208295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.208528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.208545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.208568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.211675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.221010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.221562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.221614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.221630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.221865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.222096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.222113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.222147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.225263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.234397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.234892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.234922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.234949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.235183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.235415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.235432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.235456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.238563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.247882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.248291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.248340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.248356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.248580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.248811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.248828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.248851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.251966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.261293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.261679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.261731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.261751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.261986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.262229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.262250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.262281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.265392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.274726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.275219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.275262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.275278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.275513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.275744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.275762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.275785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.278896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.288221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.288699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.288746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.288761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.288997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.289239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.289256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.289280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.292385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.301702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.302152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.302200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.302214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.302445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.302676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.302697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.302721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.305824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.616 [2024-07-15 21:47:08.315156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.616 [2024-07-15 21:47:08.315625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.616 [2024-07-15 21:47:08.315677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.616 [2024-07-15 21:47:08.315691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.616 [2024-07-15 21:47:08.315921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.616 [2024-07-15 21:47:08.316174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.616 [2024-07-15 21:47:08.316192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.616 [2024-07-15 21:47:08.316220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.616 [2024-07-15 21:47:08.319342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.617 [2024-07-15 21:47:08.328664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.617 [2024-07-15 21:47:08.329132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.617 [2024-07-15 21:47:08.329189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.617 [2024-07-15 21:47:08.329205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.617 [2024-07-15 21:47:08.329429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.617 [2024-07-15 21:47:08.329661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.617 [2024-07-15 21:47:08.329678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.617 [2024-07-15 21:47:08.329703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.617 [2024-07-15 21:47:08.332815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.617 [2024-07-15 21:47:08.342131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.617 [2024-07-15 21:47:08.342567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.617 [2024-07-15 21:47:08.342614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.617 [2024-07-15 21:47:08.342628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.617 [2024-07-15 21:47:08.342847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.617 [2024-07-15 21:47:08.343078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.617 [2024-07-15 21:47:08.343094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.617 [2024-07-15 21:47:08.343119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.617 [2024-07-15 21:47:08.346230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.617 [2024-07-15 21:47:08.355539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:17.617 [2024-07-15 21:47:08.356007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.617 [2024-07-15 21:47:08.356058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:17.617 [2024-07-15 21:47:08.356074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:17.617 [2024-07-15 21:47:08.356320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:17.617 [2024-07-15 21:47:08.356552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.617 [2024-07-15 21:47:08.356569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.617 [2024-07-15 21:47:08.356593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.617 [2024-07-15 21:47:08.359745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.617 [2024-07-15 21:47:08.368961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.617 [2024-07-15 21:47:08.369546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.617 [2024-07-15 21:47:08.369593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.617 [2024-07-15 21:47:08.369609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.617 [2024-07-15 21:47:08.369846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.617 [2024-07-15 21:47:08.370078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.617 [2024-07-15 21:47:08.370095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.617 [2024-07-15 21:47:08.370119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.617 [2024-07-15 21:47:08.373369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.617 [2024-07-15 21:47:08.382770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.617 [2024-07-15 21:47:08.383121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.617 [2024-07-15 21:47:08.383176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.617 [2024-07-15 21:47:08.383193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.617 [2024-07-15 21:47:08.383432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.617 [2024-07-15 21:47:08.383656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.617 [2024-07-15 21:47:08.383673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.617 [2024-07-15 21:47:08.383686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.617 [2024-07-15 21:47:08.387126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.617 [2024-07-15 21:47:08.396537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.617 [2024-07-15 21:47:08.396936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.617 [2024-07-15 21:47:08.396970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.617 [2024-07-15 21:47:08.396986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.617 [2024-07-15 21:47:08.397257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.617 [2024-07-15 21:47:08.397513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.617 [2024-07-15 21:47:08.397532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.617 [2024-07-15 21:47:08.397545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.617 [2024-07-15 21:47:08.401100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.875 [2024-07-15 21:47:08.410343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.875 [2024-07-15 21:47:08.410703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.875 [2024-07-15 21:47:08.410771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.875 [2024-07-15 21:47:08.410786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.875 [2024-07-15 21:47:08.411008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.875 [2024-07-15 21:47:08.411240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.875 [2024-07-15 21:47:08.411259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.875 [2024-07-15 21:47:08.411272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.875 [2024-07-15 21:47:08.414769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.875 [2024-07-15 21:47:08.424146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.875 [2024-07-15 21:47:08.424625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.875 [2024-07-15 21:47:08.424660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.875 [2024-07-15 21:47:08.424676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.875 [2024-07-15 21:47:08.424903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.875 [2024-07-15 21:47:08.425127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.875 [2024-07-15 21:47:08.425156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.875 [2024-07-15 21:47:08.425170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.875 [2024-07-15 21:47:08.428550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.437618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.438049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.438099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.438113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.438378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.438611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.438628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.438657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.441760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.451092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.451580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.451612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.451638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.451862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.452093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.452111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.452134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.455296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.464468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.464877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.464929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.464943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.465154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.465384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.465401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.465413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.468523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.477854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.478201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.478232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.478248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.478455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.478660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.478677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.478688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.481817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.491319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.491796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.491851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.491867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.492103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.492359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.492377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.492401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.495524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.504687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.505135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.505174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.505201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.505425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.505658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.505674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.505698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.508837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.518015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.518518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.518569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.518585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.518808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.519040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.519058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.519081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.522226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.531422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.531896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.531946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.531961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.532219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.532446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.532463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.532487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.535602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.544779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.545325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.545357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.545373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.545608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.545840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.545856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.545880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.549016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.558195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.558672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.558718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.558733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.558969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.559235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.559253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.559265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.562381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.876 [2024-07-15 21:47:08.571552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.876 [2024-07-15 21:47:08.572032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.876 [2024-07-15 21:47:08.572080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.876 [2024-07-15 21:47:08.572096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.876 [2024-07-15 21:47:08.572368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.876 [2024-07-15 21:47:08.572600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.876 [2024-07-15 21:47:08.572617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.876 [2024-07-15 21:47:08.572641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.876 [2024-07-15 21:47:08.575751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.877 [2024-07-15 21:47:08.584913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.877 [2024-07-15 21:47:08.585258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-07-15 21:47:08.585338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.877 [2024-07-15 21:47:08.585353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.877 [2024-07-15 21:47:08.585582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.877 [2024-07-15 21:47:08.585813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.877 [2024-07-15 21:47:08.585830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.877 [2024-07-15 21:47:08.585853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.877 [2024-07-15 21:47:08.588995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.877 [2024-07-15 21:47:08.598347] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.877 [2024-07-15 21:47:08.598730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-07-15 21:47:08.598782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.877 [2024-07-15 21:47:08.598796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.877 [2024-07-15 21:47:08.599027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.877 [2024-07-15 21:47:08.599296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.877 [2024-07-15 21:47:08.599313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.877 [2024-07-15 21:47:08.599325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.877 [2024-07-15 21:47:08.602443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.877 [2024-07-15 21:47:08.612145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.877 [2024-07-15 21:47:08.612526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-07-15 21:47:08.612581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.877 [2024-07-15 21:47:08.612595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.877 [2024-07-15 21:47:08.612834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.877 [2024-07-15 21:47:08.613058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.877 [2024-07-15 21:47:08.613076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.877 [2024-07-15 21:47:08.613088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.877 [2024-07-15 21:47:08.616377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.877 [2024-07-15 21:47:08.625662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.877 [2024-07-15 21:47:08.626131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-07-15 21:47:08.626191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.877 [2024-07-15 21:47:08.626209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.877 [2024-07-15 21:47:08.626451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.877 [2024-07-15 21:47:08.626692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.877 [2024-07-15 21:47:08.626709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.877 [2024-07-15 21:47:08.626733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.877 [2024-07-15 21:47:08.629965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.877 [2024-07-15 21:47:08.639202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.877 [2024-07-15 21:47:08.639529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-07-15 21:47:08.639598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.877 [2024-07-15 21:47:08.639632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.877 [2024-07-15 21:47:08.639849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.877 [2024-07-15 21:47:08.640080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.877 [2024-07-15 21:47:08.640097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.877 [2024-07-15 21:47:08.640121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.877 [2024-07-15 21:47:08.643256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.877 [2024-07-15 21:47:08.652604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.877 [2024-07-15 21:47:08.652966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-07-15 21:47:08.653018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.877 [2024-07-15 21:47:08.653033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.877 [2024-07-15 21:47:08.653273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.877 [2024-07-15 21:47:08.653519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.877 [2024-07-15 21:47:08.653537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.877 [2024-07-15 21:47:08.653549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.877 [2024-07-15 21:47:08.657106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.877 [2024-07-15 21:47:08.666320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.877 [2024-07-15 21:47:08.666743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.877 [2024-07-15 21:47:08.666795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:17.877 [2024-07-15 21:47:08.666811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:17.877 [2024-07-15 21:47:08.667031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:17.877 [2024-07-15 21:47:08.667263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.877 [2024-07-15 21:47:08.667287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.877 [2024-07-15 21:47:08.667300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.135 [2024-07-15 21:47:08.670673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.135 [2024-07-15 21:47:08.680048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.135 [2024-07-15 21:47:08.680394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.135 [2024-07-15 21:47:08.680432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.135 [2024-07-15 21:47:08.680447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.135 [2024-07-15 21:47:08.680668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.135 [2024-07-15 21:47:08.680891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.135 [2024-07-15 21:47:08.680910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.135 [2024-07-15 21:47:08.680922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.135 [2024-07-15 21:47:08.684307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.135 [2024-07-15 21:47:08.693865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.135 [2024-07-15 21:47:08.694289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.135 [2024-07-15 21:47:08.694339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.135 [2024-07-15 21:47:08.694355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.135 [2024-07-15 21:47:08.694582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.135 [2024-07-15 21:47:08.694806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.135 [2024-07-15 21:47:08.694824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.135 [2024-07-15 21:47:08.694837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.135 [2024-07-15 21:47:08.698020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.135 [2024-07-15 21:47:08.707215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.707655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.707706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.707720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.707951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.708353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.708372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.708395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.711516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.720576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.721004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.721049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.721064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.721338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.721571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.721588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.721611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.724721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.733945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.734436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.734483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.734498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.734734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.734966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.734983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.735006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.738150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.747317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.747745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.747789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.747804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.748039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.748296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.748314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.748338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.751446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.760795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.761211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.761262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.761277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.761510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.761742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.761759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.761783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.764936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.774321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.774684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.774759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.774774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.774994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.775248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.775277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.775289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.778409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.787748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.788064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.788088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.788112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.788352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.788557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.788573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.788585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.791673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.801172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.801578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.801630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.801644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.801846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.802050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.802066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.802082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.805217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.814737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.815161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.815233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.815247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.815449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.815653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.815669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.815681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.818853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.828255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.828669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.828702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.828729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.828930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.829155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.829173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.829186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.136 [2024-07-15 21:47:08.832398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.136 [2024-07-15 21:47:08.841713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.136 [2024-07-15 21:47:08.842158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.136 [2024-07-15 21:47:08.842196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.136 [2024-07-15 21:47:08.842210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.136 [2024-07-15 21:47:08.842439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.136 [2024-07-15 21:47:08.842670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.136 [2024-07-15 21:47:08.842687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.136 [2024-07-15 21:47:08.842710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.137 [2024-07-15 21:47:08.845845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.137 [2024-07-15 21:47:08.855201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.137 [2024-07-15 21:47:08.855666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.137 [2024-07-15 21:47:08.855718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.137 [2024-07-15 21:47:08.855744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.137 [2024-07-15 21:47:08.855969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.137 [2024-07-15 21:47:08.856237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.137 [2024-07-15 21:47:08.856255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.137 [2024-07-15 21:47:08.856266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.137 [2024-07-15 21:47:08.859383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.137 [2024-07-15 21:47:08.868553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.137 [2024-07-15 21:47:08.869044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.137 [2024-07-15 21:47:08.869092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.137 [2024-07-15 21:47:08.869107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.137 [2024-07-15 21:47:08.869380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.137 [2024-07-15 21:47:08.869612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.137 [2024-07-15 21:47:08.869630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.137 [2024-07-15 21:47:08.869653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.137 [2024-07-15 21:47:08.872763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.137 [2024-07-15 21:47:08.881922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.137 [2024-07-15 21:47:08.882420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.137 [2024-07-15 21:47:08.882451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.137 [2024-07-15 21:47:08.882479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.137 [2024-07-15 21:47:08.882707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.137 [2024-07-15 21:47:08.882939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.137 [2024-07-15 21:47:08.882957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.137 [2024-07-15 21:47:08.882981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.137 [2024-07-15 21:47:08.886118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.137 [2024-07-15 21:47:08.895480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.137 [2024-07-15 21:47:08.895995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.137 [2024-07-15 21:47:08.896026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.137 [2024-07-15 21:47:08.896042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.137 [2024-07-15 21:47:08.896302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.137 [2024-07-15 21:47:08.896513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.137 [2024-07-15 21:47:08.896531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.137 [2024-07-15 21:47:08.896543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.137 [2024-07-15 21:47:08.899715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.137 [2024-07-15 21:47:08.908896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.137 [2024-07-15 21:47:08.909398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.137 [2024-07-15 21:47:08.909444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.137 [2024-07-15 21:47:08.909460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.137 [2024-07-15 21:47:08.909722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.137 [2024-07-15 21:47:08.909967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.137 [2024-07-15 21:47:08.909985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.137 [2024-07-15 21:47:08.909997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.137 [2024-07-15 21:47:08.913622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.137 [2024-07-15 21:47:08.922685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.137 [2024-07-15 21:47:08.923022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.137 [2024-07-15 21:47:08.923076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.137 [2024-07-15 21:47:08.923120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.137 [2024-07-15 21:47:08.923348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.137 [2024-07-15 21:47:08.923572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.137 [2024-07-15 21:47:08.923590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.137 [2024-07-15 21:47:08.923603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.137 [2024-07-15 21:47:08.927161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.396 [2024-07-15 21:47:08.936477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.396 [2024-07-15 21:47:08.936774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.396 [2024-07-15 21:47:08.936799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.396 [2024-07-15 21:47:08.936814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.396 [2024-07-15 21:47:08.937034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.396 [2024-07-15 21:47:08.937266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.396 [2024-07-15 21:47:08.937285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.396 [2024-07-15 21:47:08.937298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.396 [2024-07-15 21:47:08.940679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.396 [2024-07-15 21:47:08.950251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.396 [2024-07-15 21:47:08.950546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.396 [2024-07-15 21:47:08.950570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.396 [2024-07-15 21:47:08.950585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.396 [2024-07-15 21:47:08.950806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.396 [2024-07-15 21:47:08.951029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.396 [2024-07-15 21:47:08.951047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.396 [2024-07-15 21:47:08.951060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.396 [2024-07-15 21:47:08.954380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.396 [2024-07-15 21:47:08.963656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.396 [2024-07-15 21:47:08.963991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.396 [2024-07-15 21:47:08.964034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.396 [2024-07-15 21:47:08.964066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.396 [2024-07-15 21:47:08.964327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.396 [2024-07-15 21:47:08.964580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.396 [2024-07-15 21:47:08.964597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.396 [2024-07-15 21:47:08.964621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.396 [2024-07-15 21:47:08.967891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.396 [2024-07-15 21:47:08.977534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.396 [2024-07-15 21:47:08.977880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.397 [2024-07-15 21:47:08.977926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.397 [2024-07-15 21:47:08.977942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.397 [2024-07-15 21:47:08.978203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.397 [2024-07-15 21:47:08.978448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.397 [2024-07-15 21:47:08.978465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.397 [2024-07-15 21:47:08.978489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.397 [2024-07-15 21:47:08.981881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.397 [2024-07-15 21:47:08.991373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.397 [2024-07-15 21:47:08.991706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.397 [2024-07-15 21:47:08.991751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.397 [2024-07-15 21:47:08.991773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.397 [2024-07-15 21:47:08.991995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.397 [2024-07-15 21:47:08.992257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.397 [2024-07-15 21:47:08.992278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.397 [2024-07-15 21:47:08.992291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.397 [2024-07-15 21:47:08.995750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.397 [2024-07-15 21:47:09.005221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.397 [2024-07-15 21:47:09.005612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.397 [2024-07-15 21:47:09.005637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.397 [2024-07-15 21:47:09.005652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.397 [2024-07-15 21:47:09.005892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.397 [2024-07-15 21:47:09.006122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.397 [2024-07-15 21:47:09.006150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.397 [2024-07-15 21:47:09.006165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.397 [2024-07-15 21:47:09.009657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.397 [2024-07-15 21:47:09.019132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.397 [2024-07-15 21:47:09.019466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.397 [2024-07-15 21:47:09.019510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.397 [2024-07-15 21:47:09.019526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.397 [2024-07-15 21:47:09.019754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.397 [2024-07-15 21:47:09.019984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.397 [2024-07-15 21:47:09.020003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.397 [2024-07-15 21:47:09.020016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.397 [2024-07-15 21:47:09.023513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.397 [2024-07-15 21:47:09.032982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.397 [2024-07-15 21:47:09.033320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.397 [2024-07-15 21:47:09.033365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.397 [2024-07-15 21:47:09.033380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.397 [2024-07-15 21:47:09.033614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.397 [2024-07-15 21:47:09.033844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.397 [2024-07-15 21:47:09.033869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.397 [2024-07-15 21:47:09.033883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.397 [2024-07-15 21:47:09.037389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.397 [2024-07-15 21:47:09.046848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.397 [2024-07-15 21:47:09.047161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.397 [2024-07-15 21:47:09.047187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.397 [2024-07-15 21:47:09.047203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.397 [2024-07-15 21:47:09.047431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.397 [2024-07-15 21:47:09.047661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.397 [2024-07-15 21:47:09.047680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.397 [2024-07-15 21:47:09.047694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.397 [2024-07-15 21:47:09.051184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.397 [2024-07-15 21:47:09.060654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.397 [2024-07-15 21:47:09.061013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.397 [2024-07-15 21:47:09.061063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.397 [2024-07-15 21:47:09.061078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.397 [2024-07-15 21:47:09.061314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.397 [2024-07-15 21:47:09.061545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.397 [2024-07-15 21:47:09.061564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.397 [2024-07-15 21:47:09.061577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.397 [2024-07-15 21:47:09.065070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.397 [2024-07-15 21:47:09.074429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.397 [2024-07-15 21:47:09.074801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.397 [2024-07-15 21:47:09.074866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.397 [2024-07-15 21:47:09.074882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.397 [2024-07-15 21:47:09.075133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.397 [2024-07-15 21:47:09.075393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.397 [2024-07-15 21:47:09.075412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.397 [2024-07-15 21:47:09.075438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.397 [2024-07-15 21:47:09.078834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.397 [2024-07-15 21:47:09.088109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.397 [2024-07-15 21:47:09.088517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.397 [2024-07-15 21:47:09.088599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.397 [2024-07-15 21:47:09.088614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.397 [2024-07-15 21:47:09.088842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.397 [2024-07-15 21:47:09.089072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.397 [2024-07-15 21:47:09.089089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.397 [2024-07-15 21:47:09.089113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.397 [2024-07-15 21:47:09.092226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.397 [2024-07-15 21:47:09.101529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.397 [2024-07-15 21:47:09.101862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.397 [2024-07-15 21:47:09.101952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.397 [2024-07-15 21:47:09.101967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.397 [2024-07-15 21:47:09.102208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.397 [2024-07-15 21:47:09.102440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.397 [2024-07-15 21:47:09.102456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.397 [2024-07-15 21:47:09.102480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.397 [2024-07-15 21:47:09.105583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.397 [2024-07-15 21:47:09.114901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.397 [2024-07-15 21:47:09.115237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.397 [2024-07-15 21:47:09.115310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.397 [2024-07-15 21:47:09.115341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.397 [2024-07-15 21:47:09.115574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.397 [2024-07-15 21:47:09.115805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.397 [2024-07-15 21:47:09.115822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.397 [2024-07-15 21:47:09.115846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.397 [2024-07-15 21:47:09.118957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.398 [2024-07-15 21:47:09.128256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.398 [2024-07-15 21:47:09.128671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.398 [2024-07-15 21:47:09.128703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.398 [2024-07-15 21:47:09.128730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.398 [2024-07-15 21:47:09.128935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.398 [2024-07-15 21:47:09.129148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.398 [2024-07-15 21:47:09.129165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.398 [2024-07-15 21:47:09.129176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.398 [2024-07-15 21:47:09.132289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.398 [2024-07-15 21:47:09.141572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.398 [2024-07-15 21:47:09.142078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.398 [2024-07-15 21:47:09.142124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.398 [2024-07-15 21:47:09.142159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.398 [2024-07-15 21:47:09.142384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.398 [2024-07-15 21:47:09.142589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.398 [2024-07-15 21:47:09.142605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.398 [2024-07-15 21:47:09.142617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.398 [2024-07-15 21:47:09.145706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.398 [2024-07-15 21:47:09.155020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.398 [2024-07-15 21:47:09.155535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.398 [2024-07-15 21:47:09.155586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.398 [2024-07-15 21:47:09.155601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.398 [2024-07-15 21:47:09.155809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.398 [2024-07-15 21:47:09.156019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.398 [2024-07-15 21:47:09.156035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.398 [2024-07-15 21:47:09.156047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.398 [2024-07-15 21:47:09.159174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.398 [2024-07-15 21:47:09.168635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.398 [2024-07-15 21:47:09.169061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.398 [2024-07-15 21:47:09.169111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.398 [2024-07-15 21:47:09.169128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.398 [2024-07-15 21:47:09.169365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.398 [2024-07-15 21:47:09.169590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.398 [2024-07-15 21:47:09.169609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.398 [2024-07-15 21:47:09.169627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.398 [2024-07-15 21:47:09.173007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.398 [2024-07-15 21:47:09.182384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.398 [2024-07-15 21:47:09.182814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.398 [2024-07-15 21:47:09.182866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.398 [2024-07-15 21:47:09.182883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.398 [2024-07-15 21:47:09.183110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.398 [2024-07-15 21:47:09.183345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.398 [2024-07-15 21:47:09.183364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.398 [2024-07-15 21:47:09.183377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.398 [2024-07-15 21:47:09.186839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.657 [2024-07-15 21:47:09.196105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.657 [2024-07-15 21:47:09.196498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.657 [2024-07-15 21:47:09.196547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.657 [2024-07-15 21:47:09.196564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.657 [2024-07-15 21:47:09.196791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.657 [2024-07-15 21:47:09.197015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.657 [2024-07-15 21:47:09.197033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.657 [2024-07-15 21:47:09.197046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.657 [2024-07-15 21:47:09.200437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.657 [2024-07-15 21:47:09.209698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.657 [2024-07-15 21:47:09.210111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.657 [2024-07-15 21:47:09.210170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.657 [2024-07-15 21:47:09.210198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.657 [2024-07-15 21:47:09.210433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.657 [2024-07-15 21:47:09.210675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.657 [2024-07-15 21:47:09.210693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.657 [2024-07-15 21:47:09.210717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.657 [2024-07-15 21:47:09.213944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.657 [2024-07-15 21:47:09.223179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.657 [2024-07-15 21:47:09.223718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.657 [2024-07-15 21:47:09.223767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.657 [2024-07-15 21:47:09.223794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.657 [2024-07-15 21:47:09.224019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.657 [2024-07-15 21:47:09.224263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.657 [2024-07-15 21:47:09.224281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.657 [2024-07-15 21:47:09.224304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.657 [2024-07-15 21:47:09.227407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.657 [2024-07-15 21:47:09.236570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.657 [2024-07-15 21:47:09.237045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.657 [2024-07-15 21:47:09.237095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.657 [2024-07-15 21:47:09.237110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.657 [2024-07-15 21:47:09.237353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.657 [2024-07-15 21:47:09.237584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.657 [2024-07-15 21:47:09.237601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.657 [2024-07-15 21:47:09.237625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.657 [2024-07-15 21:47:09.240734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.657 [2024-07-15 21:47:09.250044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.657 [2024-07-15 21:47:09.250548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.657 [2024-07-15 21:47:09.250581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.657 [2024-07-15 21:47:09.250597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.657 [2024-07-15 21:47:09.250821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.657 [2024-07-15 21:47:09.251053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.657 [2024-07-15 21:47:09.251070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.657 [2024-07-15 21:47:09.251094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.657 [2024-07-15 21:47:09.254213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.657 [2024-07-15 21:47:09.263528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.657 [2024-07-15 21:47:09.263869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.657 [2024-07-15 21:47:09.263958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.657 [2024-07-15 21:47:09.263973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.657 [2024-07-15 21:47:09.264216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.657 [2024-07-15 21:47:09.264452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.657 [2024-07-15 21:47:09.264470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.264493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.267598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.276911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.277343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.277393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.277407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.277637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.277867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.277884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.277907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.281014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.290326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.290730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.290784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.290798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.291000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.291212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.291230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.291241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.294325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.303797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.304371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.304403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.304418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.304626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.304831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.304847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.304859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.307957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.317258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.317708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.317739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.317754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.317962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.318177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.318195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.318207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.321297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.330581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.331016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.331069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.331083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.331321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.331553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.331570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.331593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.334697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.344010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.344541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.344586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.344601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.344809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.345013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.345030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.345042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.348171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.357498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.357913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.357962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.357980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.358191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.358395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.358412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.358424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.361521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.370814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.371223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.371249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.371264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.371496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.371726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.371743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.371767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.375032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.384335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.384737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.384779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.384794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.385002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.385246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.385264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.385276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.388439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.397822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:18.658 [2024-07-15 21:47:09.398188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.658 [2024-07-15 21:47:09.398212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420
00:25:18.658 [2024-07-15 21:47:09.398226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set
00:25:18.658 [2024-07-15 21:47:09.398445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor
00:25:18.658 [2024-07-15 21:47:09.398676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:18.658 [2024-07-15 21:47:09.398697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:18.658 [2024-07-15 21:47:09.398722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:18.658 [2024-07-15 21:47:09.401907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:18.658 [2024-07-15 21:47:09.411341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.658 [2024-07-15 21:47:09.411878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.658 [2024-07-15 21:47:09.411939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.658 [2024-07-15 21:47:09.411956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.658 [2024-07-15 21:47:09.412196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.658 [2024-07-15 21:47:09.412470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.659 [2024-07-15 21:47:09.412489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.659 [2024-07-15 21:47:09.412503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.659 [2024-07-15 21:47:09.416023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.659 [2024-07-15 21:47:09.425029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.659 [2024-07-15 21:47:09.425476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.659 [2024-07-15 21:47:09.425511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.659 [2024-07-15 21:47:09.425528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.659 [2024-07-15 21:47:09.425755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.659 [2024-07-15 21:47:09.425979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.659 [2024-07-15 21:47:09.425997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.659 [2024-07-15 21:47:09.426010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.659 [2024-07-15 21:47:09.429399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.659 [2024-07-15 21:47:09.438667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.659 [2024-07-15 21:47:09.439133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.659 [2024-07-15 21:47:09.439186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.659 [2024-07-15 21:47:09.439212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.659 [2024-07-15 21:47:09.439437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.659 [2024-07-15 21:47:09.439669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.659 [2024-07-15 21:47:09.439686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.659 [2024-07-15 21:47:09.439709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.659 [2024-07-15 21:47:09.442818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.918 [2024-07-15 21:47:09.452382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.918 [2024-07-15 21:47:09.452886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.918 [2024-07-15 21:47:09.452934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.918 [2024-07-15 21:47:09.452950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.918 [2024-07-15 21:47:09.453212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.918 [2024-07-15 21:47:09.453431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.918 [2024-07-15 21:47:09.453449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.918 [2024-07-15 21:47:09.453473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.918 [2024-07-15 21:47:09.456787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.918 [2024-07-15 21:47:09.466002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.918 [2024-07-15 21:47:09.466623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.918 [2024-07-15 21:47:09.466676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.918 [2024-07-15 21:47:09.466692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.918 [2024-07-15 21:47:09.466940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.918 [2024-07-15 21:47:09.467219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.918 [2024-07-15 21:47:09.467237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.918 [2024-07-15 21:47:09.467249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.918 [2024-07-15 21:47:09.470436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.918 [2024-07-15 21:47:09.479400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.479876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.479920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.479935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.480153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.480383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.480401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.480412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.483501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 [2024-07-15 21:47:09.492798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.493178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.493222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.493236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.493455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.493660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.493676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.493688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.496805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 [2024-07-15 21:47:09.506280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.506709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.506755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.506770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.506978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.507194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.507211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.507224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.510337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 [2024-07-15 21:47:09.519632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.520046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.520097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.520111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.520347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.520552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.520569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.520581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.523669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 [2024-07-15 21:47:09.532948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.533393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.533438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.533453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.533689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.533920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.533937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.533968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.537085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 [2024-07-15 21:47:09.546419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.546895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.546940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.546955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.547203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.547434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.547451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.547475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.550584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 [2024-07-15 21:47:09.559914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.560478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.560523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.560538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.560775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.561007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.561024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.561048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.564187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 [2024-07-15 21:47:09.573343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.573889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.573921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.573936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.574183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.574420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.574437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.574461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.577570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 [2024-07-15 21:47:09.586894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.587399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.587451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.587478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.587714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.587949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.587968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.587982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.919 [2024-07-15 21:47:09.591245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 434925 Killed "${NVMF_APP[@]}" "$@" 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.919 [2024-07-15 21:47:09.600683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.919 [2024-07-15 21:47:09.601190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.919 [2024-07-15 21:47:09.601245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.919 [2024-07-15 21:47:09.601261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.919 [2024-07-15 21:47:09.601519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.919 [2024-07-15 21:47:09.601760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.919 [2024-07-15 21:47:09.601779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.919 [2024-07-15 21:47:09.601792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=435662 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 435662 00:25:18.919 [2024-07-15 21:47:09.605091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 435662 ']' 00:25:18.919 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.920 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.920 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:18.920 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.920 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:18.920 [2024-07-15 21:47:09.614431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.920 [2024-07-15 21:47:09.614827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.920 [2024-07-15 21:47:09.614853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.920 [2024-07-15 21:47:09.614874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.920 [2024-07-15 21:47:09.615102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.920 [2024-07-15 21:47:09.615345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.920 [2024-07-15 21:47:09.615363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.920 [2024-07-15 21:47:09.615392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.920 [2024-07-15 21:47:09.618795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.920 [2024-07-15 21:47:09.628196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.920 [2024-07-15 21:47:09.628601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.920 [2024-07-15 21:47:09.628651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.920 [2024-07-15 21:47:09.628667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.920 [2024-07-15 21:47:09.628895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.920 [2024-07-15 21:47:09.629119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.920 [2024-07-15 21:47:09.629146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.920 [2024-07-15 21:47:09.629161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.920 [2024-07-15 21:47:09.632551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.920 [2024-07-15 21:47:09.641772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.920 [2024-07-15 21:47:09.642133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.920 [2024-07-15 21:47:09.642196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.920 [2024-07-15 21:47:09.642211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.920 [2024-07-15 21:47:09.642452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.920 [2024-07-15 21:47:09.642693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.920 [2024-07-15 21:47:09.642711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.920 [2024-07-15 21:47:09.642735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.920 [2024-07-15 21:47:09.645938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.920 [2024-07-15 21:47:09.651868] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:25:18.920 [2024-07-15 21:47:09.651958] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.920 [2024-07-15 21:47:09.655485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.920 [2024-07-15 21:47:09.655893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.920 [2024-07-15 21:47:09.655946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.920 [2024-07-15 21:47:09.655969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.920 [2024-07-15 21:47:09.656217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.920 [2024-07-15 21:47:09.656459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.920 [2024-07-15 21:47:09.656477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.920 [2024-07-15 21:47:09.656503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.920 [2024-07-15 21:47:09.659717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.920 [2024-07-15 21:47:09.668927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.920 [2024-07-15 21:47:09.669381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.920 [2024-07-15 21:47:09.669414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.920 [2024-07-15 21:47:09.669441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.920 [2024-07-15 21:47:09.669669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.920 [2024-07-15 21:47:09.669897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.920 [2024-07-15 21:47:09.669915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.920 [2024-07-15 21:47:09.669928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.920 [2024-07-15 21:47:09.673463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.920 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.920 [2024-07-15 21:47:09.682668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.920 [2024-07-15 21:47:09.682983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.920 [2024-07-15 21:47:09.683009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.920 [2024-07-15 21:47:09.683024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.920 [2024-07-15 21:47:09.683252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.920 [2024-07-15 21:47:09.683476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.920 [2024-07-15 21:47:09.683494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.920 [2024-07-15 21:47:09.683508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.920 [2024-07-15 21:47:09.686887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.920 [2024-07-15 21:47:09.696440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.920 [2024-07-15 21:47:09.696807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.920 [2024-07-15 21:47:09.696842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:18.920 [2024-07-15 21:47:09.696856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:18.920 [2024-07-15 21:47:09.697085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:18.920 [2024-07-15 21:47:09.697339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.920 [2024-07-15 21:47:09.697361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.920 [2024-07-15 21:47:09.697386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.920 [2024-07-15 21:47:09.700594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:18.920 [2024-07-15 21:47:09.706586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:18.920 [2024-07-15 21:47:09.710273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.180 [2024-07-15 21:47:09.710716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.180 [2024-07-15 21:47:09.710759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.180 [2024-07-15 21:47:09.710777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.180 [2024-07-15 21:47:09.711020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.180 [2024-07-15 21:47:09.711274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.180 [2024-07-15 21:47:09.711295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.180 [2024-07-15 21:47:09.711311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.180 [2024-07-15 21:47:09.714780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.180 [2024-07-15 21:47:09.724073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.180 [2024-07-15 21:47:09.724598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.180 [2024-07-15 21:47:09.724630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.180 [2024-07-15 21:47:09.724648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.180 [2024-07-15 21:47:09.724878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.180 [2024-07-15 21:47:09.725105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.180 [2024-07-15 21:47:09.725123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.180 [2024-07-15 21:47:09.725146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.180 [2024-07-15 21:47:09.728527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.180 [2024-07-15 21:47:09.737708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.180 [2024-07-15 21:47:09.738164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.180 [2024-07-15 21:47:09.738206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.180 [2024-07-15 21:47:09.738223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.180 [2024-07-15 21:47:09.738471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.180 [2024-07-15 21:47:09.738715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.180 [2024-07-15 21:47:09.738733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.180 [2024-07-15 21:47:09.738760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.180 [2024-07-15 21:47:09.741977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.180 [2024-07-15 21:47:09.751214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.180 [2024-07-15 21:47:09.751681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.180 [2024-07-15 21:47:09.751723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.180 [2024-07-15 21:47:09.751753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.180 [2024-07-15 21:47:09.752002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.180 [2024-07-15 21:47:09.752265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.180 [2024-07-15 21:47:09.752284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.180 [2024-07-15 21:47:09.752311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.180 [2024-07-15 21:47:09.755515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.180 [2024-07-15 21:47:09.764854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.180 [2024-07-15 21:47:09.765369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.180 [2024-07-15 21:47:09.765422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.180 [2024-07-15 21:47:09.765442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.180 [2024-07-15 21:47:09.765685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.180 [2024-07-15 21:47:09.765931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.765949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.765977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.769193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 [2024-07-15 21:47:09.778423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.778856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.778900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.778917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.779181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.779426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.779444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.779471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.782675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 [2024-07-15 21:47:09.791904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.792470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.792512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.792553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.792798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.793043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.793061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.793087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.796303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:19.181 [2024-07-15 21:47:09.802586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.181 [2024-07-15 21:47:09.802618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.181 [2024-07-15 21:47:09.802644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.181 [2024-07-15 21:47:09.802655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:19.181 [2024-07-15 21:47:09.802665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.181 [2024-07-15 21:47:09.802718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.181 [2024-07-15 21:47:09.802767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.181 [2024-07-15 21:47:09.802770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.181 [2024-07-15 21:47:09.805750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.806183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.806227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.806247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.806486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.806716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.806735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.806750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.810181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 [2024-07-15 21:47:09.819464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.819936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.819970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.819989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.820234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.820465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.820484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.820500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.823924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 [2024-07-15 21:47:09.833196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.833659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.833694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.833713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.833947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.834186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.834206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.834223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.837623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 [2024-07-15 21:47:09.847061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.847525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.847558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.847577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.847809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.848037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.848056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.848072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.851470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 [2024-07-15 21:47:09.860935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.861447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.861495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.861513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.861799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.862029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.862047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.862064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.865510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 [2024-07-15 21:47:09.874731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.875173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.875206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.875234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.875464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.875692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.875710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.875727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.879109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 [2024-07-15 21:47:09.888488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 [2024-07-15 21:47:09.888820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.888847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.888863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 [2024-07-15 21:47:09.889087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.181 [2024-07-15 21:47:09.889347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.181 [2024-07-15 21:47:09.889366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.181 [2024-07-15 21:47:09.889380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.181 [2024-07-15 21:47:09.892760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.181 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.181 [2024-07-15 21:47:09.902131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.181 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:19.181 [2024-07-15 21:47:09.902524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.181 [2024-07-15 21:47:09.902561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.181 [2024-07-15 21:47:09.902578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.181 21:47:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.182 [2024-07-15 21:47:09.902807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.182 [2024-07-15 21:47:09.903034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.182 [2024-07-15 21:47:09.903053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.182 [2024-07-15 21:47:09.903066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:19.182 [2024-07-15 21:47:09.906485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.182 [2024-07-15 21:47:09.915877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.182 [2024-07-15 21:47:09.916219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.182 [2024-07-15 21:47:09.916246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.182 [2024-07-15 21:47:09.916264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.182 [2024-07-15 21:47:09.916495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.182 [2024-07-15 21:47:09.916719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.182 [2024-07-15 21:47:09.916737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.182 [2024-07-15 21:47:09.916750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.182 [2024-07-15 21:47:09.920175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:19.182 [2024-07-15 21:47:09.929693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.182 [2024-07-15 21:47:09.930002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.182 [2024-07-15 21:47:09.930028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.182 [2024-07-15 21:47:09.930044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.182 [2024-07-15 21:47:09.930281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.182 [2024-07-15 21:47:09.930513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.182 [2024-07-15 21:47:09.930532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.182 [2024-07-15 21:47:09.930546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.182 [2024-07-15 21:47:09.934035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.182 [2024-07-15 21:47:09.934127] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.182 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:19.182 [2024-07-15 21:47:09.943509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.182 [2024-07-15 21:47:09.943873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.182 [2024-07-15 21:47:09.943909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.182 [2024-07-15 21:47:09.943927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.182 [2024-07-15 21:47:09.944172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.182 [2024-07-15 21:47:09.944406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.182 [2024-07-15 21:47:09.944425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.182 [2024-07-15 21:47:09.944440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.182 [2024-07-15 21:47:09.947921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.182 [2024-07-15 21:47:09.957399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.182 [2024-07-15 21:47:09.957709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.182 [2024-07-15 21:47:09.957735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.182 [2024-07-15 21:47:09.957751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.182 [2024-07-15 21:47:09.957973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.182 [2024-07-15 21:47:09.958208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.182 [2024-07-15 21:47:09.958227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.182 [2024-07-15 21:47:09.958240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.182 [2024-07-15 21:47:09.961626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.182 [2024-07-15 21:47:09.971196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.182 [2024-07-15 21:47:09.971687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.182 [2024-07-15 21:47:09.971732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.182 [2024-07-15 21:47:09.971751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.182 [2024-07-15 21:47:09.972000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.182 [2024-07-15 21:47:09.972257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.182 [2024-07-15 21:47:09.972286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.182 [2024-07-15 21:47:09.972316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.440 Malloc0 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:19.440 [2024-07-15 21:47:09.975718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:19.440 [2024-07-15 21:47:09.985035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.440 [2024-07-15 21:47:09.985377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.440 [2024-07-15 21:47:09.985414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd638d0 with addr=10.0.0.2, port=4420 00:25:19.440 [2024-07-15 21:47:09.985430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd638d0 is same with the state(5) to be set 00:25:19.440 [2024-07-15 21:47:09.985669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd638d0 (9): Bad file descriptor 00:25:19.440 [2024-07-15 21:47:09.985920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:19.440 [2024-07-15 21:47:09.985938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:19.440 [2024-07-15 21:47:09.985971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.440 [2024-07-15 21:47:09.989372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:19.440 [2024-07-15 21:47:09.994027] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.440 21:47:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 435149 00:25:19.440 [2024-07-15 21:47:09.998857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.440 [2024-07-15 21:47:10.124181] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:29.407 00:25:29.407 Latency(us) 00:25:29.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.407 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:29.407 Verification LBA range: start 0x0 length 0x4000 00:25:29.408 Nvme1n1 : 15.00 7060.18 27.58 9502.63 0.00 7704.38 594.68 17476.27 00:25:29.408 =================================================================================================================== 00:25:29.408 Total : 7060.18 27.58 9502.63 0.00 7704.38 594.68 17476.27 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:29.408 rmmod nvme_tcp 00:25:29.408 rmmod nvme_fabrics 00:25:29.408 rmmod nvme_keyring 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 435662 ']' 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 435662 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 435662 ']' 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 435662 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 435662 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 435662' 00:25:29.408 killing process with pid 435662 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 435662 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 435662 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.408 21:47:19 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.408 21:47:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.785 21:47:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:30.785 00:25:30.785 real 0m21.811s 00:25:30.785 user 0m58.961s 00:25:30.785 sys 0m3.901s 00:25:30.785 21:47:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.785 21:47:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.785 ************************************ 00:25:30.785 END TEST nvmf_bdevperf 00:25:30.785 ************************************ 00:25:31.043 21:47:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:31.043 21:47:21 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:31.043 21:47:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:31.043 21:47:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:31.043 21:47:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:31.043 ************************************ 00:25:31.043 START TEST nvmf_target_disconnect 00:25:31.043 ************************************ 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:31.043 * Looking for test storage... 
00:25:31.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:31.043 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:31.044 21:47:21 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:31.044 21:47:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:32.972 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:32.972 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.972 21:47:23 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:32.972 Found net devices under 0000:08:00.0: cvl_0_0 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:32.972 Found net devices under 0000:08:00.1: cvl_0_1 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:32.972 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:32.973 21:47:23 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:32.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:25:32.973 00:25:32.973 --- 10.0.0.2 ping statistics --- 00:25:32.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.973 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:32.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:25:32.973 00:25:32.973 --- 10.0.0.1 ping statistics --- 00:25:32.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.973 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:32.973 21:47:23 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:32.973 ************************************ 00:25:32.973 START TEST nvmf_target_disconnect_tc1 00:25:32.973 ************************************ 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.973 21:47:23 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:32.973 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.973 [2024-07-15 21:47:23.585703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.973 [2024-07-15 21:47:23.585778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113f0c0 with addr=10.0.0.2, port=4420 00:25:32.973 [2024-07-15 21:47:23.585815] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:32.973 [2024-07-15 21:47:23.585843] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:32.973 [2024-07-15 21:47:23.585858] nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:32.973 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:32.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:32.973 Initializing NVMe Controllers 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:32.973 00:25:32.973 real 0m0.100s 00:25:32.973 user 0m0.045s 00:25:32.973 sys 0m0.054s 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:32.973 ************************************ 00:25:32.973 END TEST nvmf_target_disconnect_tc1 00:25:32.973 ************************************ 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:32.973 ************************************ 00:25:32.973 START TEST nvmf_target_disconnect_tc2 00:25:32.973 
************************************ 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=438086 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 438086 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 438086 ']' 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:32.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.973 21:47:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:32.973 [2024-07-15 21:47:23.715624] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:25:32.973 [2024-07-15 21:47:23.715718] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.973 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.231 [2024-07-15 21:47:23.780425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.231 [2024-07-15 21:47:23.898243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.231 [2024-07-15 21:47:23.898299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.231 [2024-07-15 21:47:23.898316] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.231 [2024-07-15 21:47:23.898331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.231 [2024-07-15 21:47:23.898344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:33.231 [2024-07-15 21:47:23.898446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:33.231 [2024-07-15 21:47:23.898584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:33.231 [2024-07-15 21:47:23.898702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:33.231 [2024-07-15 21:47:23.898712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:33.231 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.231 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:33.231 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.231 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.231 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.489 Malloc0 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.489 [2024-07-15 21:47:24.062450] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.489 [2024-07-15 21:47:24.090666] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=438116 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:33.489 21:47:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:33.489 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.397 21:47:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 438086 00:25:35.397 21:47:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:35.397 Read completed with error (sct=0, sc=8) 00:25:35.397 starting I/O failed 00:25:35.397 Read completed with error 
(sct=0, sc=8) 00:25:35.397 starting I/O failed 00:25:35.397 Read completed with error (sct=0, sc=8) 00:25:35.397 starting I/O failed 00:25:35.397 Read completed with error (sct=0, sc=8) 00:25:35.397 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 
00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 [2024-07-15 21:47:26.114957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 
starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 [2024-07-15 21:47:26.115323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O 
failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 
00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 [2024-07-15 21:47:26.115622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read 
completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Write completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.398 starting I/O failed 00:25:35.398 Read completed with error (sct=0, sc=8) 00:25:35.399 starting I/O failed 00:25:35.399 Read completed with error (sct=0, sc=8) 00:25:35.399 starting I/O failed 00:25:35.399 Write completed with error (sct=0, sc=8) 00:25:35.399 starting I/O failed 00:25:35.399 Write completed with error (sct=0, sc=8) 00:25:35.399 starting I/O failed 00:25:35.399 Write completed with error (sct=0, sc=8) 00:25:35.399 starting I/O failed 00:25:35.399 Read completed with error (sct=0, sc=8) 00:25:35.399 starting I/O failed 00:25:35.399 [2024-07-15 21:47:26.115924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:35.399 [2024-07-15 21:47:26.116117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.116157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 
00:25:35.399 [2024-07-15 21:47:26.116446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.116481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.116629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.116656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.116779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.116806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.116981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.117011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.117114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.117146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 
00:25:35.399 [2024-07-15 21:47:26.117332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.117368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.117558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.117583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.117724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.117750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.117896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.117963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.118126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.118162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 
00:25:35.399 [2024-07-15 21:47:26.118338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.118361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.118590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.118614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.118797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.118854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.118996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.119080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.119243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.119281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 
00:25:35.399 [2024-07-15 21:47:26.119397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.119427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.119627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.119701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.119819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.119844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.119953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.119979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.120075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.120104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 
00:25:35.399 [2024-07-15 21:47:26.120326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.120393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.120543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.120591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.120676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.120698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.120819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.120873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.120991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.121055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 
00:25:35.399 [2024-07-15 21:47:26.121156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.121211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.121327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.121349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.121449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.121471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.121628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.121691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.121789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.121812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 
00:25:35.399 [2024-07-15 21:47:26.121890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.121912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.122066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.122089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.122200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.122253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.122326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.122348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 00:25:35.399 [2024-07-15 21:47:26.122420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.399 [2024-07-15 21:47:26.122442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.399 qpair failed and we were unable to recover it. 
00:25:35.399 [2024-07-15 21:47:26.122579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.399 [2024-07-15 21:47:26.122630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.399 qpair failed and we were unable to recover it.
00:25:35.399 [2024-07-15 21:47:26.122755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.399 [2024-07-15 21:47:26.122790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.399 qpair failed and we were unable to recover it.
00:25:35.399 [2024-07-15 21:47:26.122994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.123107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.123241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.123367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.123462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.123560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.123677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.123803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.123930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.123952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.124909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.124932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.125027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.125175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.125417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.125514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.125603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.125693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.125827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.125917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.125996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.126910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.126941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.127034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.127056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.127135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.127166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.127266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.127319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.127390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.400 [2024-07-15 21:47:26.127412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.400 qpair failed and we were unable to recover it.
00:25:35.400 [2024-07-15 21:47:26.127481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.127502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.127688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.127710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.127779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.127801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.127867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.127889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.127954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.127976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.128930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.128955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.129046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.129144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.129245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.129339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.129436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.129651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.129747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.129902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.129982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.130796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.130985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.401 [2024-07-15 21:47:26.131853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.401 qpair failed and we were unable to recover it.
00:25:35.401 [2024-07-15 21:47:26.131919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.131941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.132012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.132034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.132219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.132243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.132317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.132340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.132447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.132492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.132595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.132649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.132754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.132810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.132880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.132903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.133006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.133058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.133131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.133165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.133261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.402 [2024-07-15 21:47:26.133293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.402 qpair failed and we were unable to recover it.
00:25:35.402 [2024-07-15 21:47:26.133386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.133408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.133476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.133498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.133565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.133587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.133659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.133682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.133754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.133779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 
00:25:35.402 [2024-07-15 21:47:26.133857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.133880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.134069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.134093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.134162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.134185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.134252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.134274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.134350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.134372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 
00:25:35.402 [2024-07-15 21:47:26.134460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.134523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.134625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.134677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.134744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.134768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.134838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.134861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.134970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.135014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 
00:25:35.402 [2024-07-15 21:47:26.135083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.135105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.135189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.135214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.135284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.135306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.135377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.135399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.135466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.135489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 
00:25:35.402 [2024-07-15 21:47:26.135556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.402 [2024-07-15 21:47:26.135578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.402 qpair failed and we were unable to recover it. 00:25:35.402 [2024-07-15 21:47:26.135645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.135667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.135740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.135764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.135837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.135860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.135935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.135960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 
00:25:35.403 [2024-07-15 21:47:26.136040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.136145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.136241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.136338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.136428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 
00:25:35.403 [2024-07-15 21:47:26.136529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.136629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.136722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.136825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.136919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.136942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 
00:25:35.403 [2024-07-15 21:47:26.137009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.137110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.137213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.137309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.137398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 
00:25:35.403 [2024-07-15 21:47:26.137501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.137594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.137691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.137787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.137812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 
00:25:35.403 [2024-07-15 21:47:26.138097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 
00:25:35.403 [2024-07-15 21:47:26.138579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.138957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.138979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 
00:25:35.403 [2024-07-15 21:47:26.139044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.139066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.139135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.139162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.139229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.139251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.139329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.139353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.403 [2024-07-15 21:47:26.139424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.139447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 
00:25:35.403 [2024-07-15 21:47:26.139514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.403 [2024-07-15 21:47:26.139537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.403 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.139611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.139635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.139708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.139730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.139797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.139819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.139885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.139907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 
00:25:35.404 [2024-07-15 21:47:26.139979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.140068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.140180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.140274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.140361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 
00:25:35.404 [2024-07-15 21:47:26.140458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.140550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.140650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.140750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.140843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 
00:25:35.404 [2024-07-15 21:47:26.140931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.140953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.141024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.141114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.141208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.141310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 
00:25:35.404 [2024-07-15 21:47:26.141417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.141512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.141600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.141697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 00:25:35.404 [2024-07-15 21:47:26.141785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.404 [2024-07-15 21:47:26.141807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.404 qpair failed and we were unable to recover it. 
00:25:35.404 [2024-07-15 21:47:26.141926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.141951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.142966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.142988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.143055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.143077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.143147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.143170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.143295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.143319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.143391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.143414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.143513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.404 [2024-07-15 21:47:26.143564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.404 qpair failed and we were unable to recover it.
00:25:35.404 [2024-07-15 21:47:26.143629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.143651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.143716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.143738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.143804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.143826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.143896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.143919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.143987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.144970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.144994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.145922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.145999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.146920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.146988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.147011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.147078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.147100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.147227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.147250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.147326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.147349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.147451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.147494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.147720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.147747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.147834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.405 [2024-07-15 21:47:26.147859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.405 qpair failed and we were unable to recover it.
00:25:35.405 [2024-07-15 21:47:26.147969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.148948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.148970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.149933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.149956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.150925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.150996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.151019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.151098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.151120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.151261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.151297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.151399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.406 [2024-07-15 21:47:26.151446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.406 qpair failed and we were unable to recover it.
00:25:35.406 [2024-07-15 21:47:26.151517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.151539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.151607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.151629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.151695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.151717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.151789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.151811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.151877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.151899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.151967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.151989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.152930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.152954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.153097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.153196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.153294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.153435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.153526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.153669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.153761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.407 [2024-07-15 21:47:26.153849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.407 qpair failed and we were unable to recover it.
00:25:35.407 [2024-07-15 21:47:26.153919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.153941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.154064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.154155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.154250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.154346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 
00:25:35.407 [2024-07-15 21:47:26.154442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.154542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.154639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.154753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.154856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 
00:25:35.407 [2024-07-15 21:47:26.154948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.154971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.155040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.155062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.155194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.155217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.155283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.155306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.155385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.155409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 
00:25:35.407 [2024-07-15 21:47:26.155482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.155506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.155571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.155594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.155661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.407 [2024-07-15 21:47:26.155683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.407 qpair failed and we were unable to recover it. 00:25:35.407 [2024-07-15 21:47:26.155755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.155778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.155845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.155867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 
00:25:35.408 [2024-07-15 21:47:26.155941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.155964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.156042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.156149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.156248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.156346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 
00:25:35.408 [2024-07-15 21:47:26.156434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.156523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.156669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.156763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.156919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.156951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 
00:25:35.408 [2024-07-15 21:47:26.157037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.157129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.157224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.157313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.157402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 
00:25:35.408 [2024-07-15 21:47:26.157503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.157597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.157743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.157834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.157926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.157948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 
00:25:35.408 [2024-07-15 21:47:26.158017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.158039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.158108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.158132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.158371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.158404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.158524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.158574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.158642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.158664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 
00:25:35.408 [2024-07-15 21:47:26.158733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.158755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.158833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.158857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.158926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.158949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.159013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.159147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 
00:25:35.408 [2024-07-15 21:47:26.159272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.159392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.159491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.159588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.159688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 
00:25:35.408 [2024-07-15 21:47:26.159782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.159875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.159899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.159984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.160030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.160106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.408 [2024-07-15 21:47:26.160128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.408 qpair failed and we were unable to recover it. 00:25:35.408 [2024-07-15 21:47:26.160202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 
00:25:35.409 [2024-07-15 21:47:26.160300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.160391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.160490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.160589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.160687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 
00:25:35.409 [2024-07-15 21:47:26.160784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.160879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.160969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.160991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.161062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.161085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.161153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.161175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 
00:25:35.409 [2024-07-15 21:47:26.161249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.161274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.161353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.161375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.161441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.161464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.161532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.161557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 00:25:35.409 [2024-07-15 21:47:26.161637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.409 [2024-07-15 21:47:26.161659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.409 qpair failed and we were unable to recover it. 
00:25:35.409 [2024-07-15 21:47:26.161730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.409 [2024-07-15 21:47:26.161752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.409 qpair failed and we were unable to recover it.
00:25:35.412 [... the three-line error sequence above repeats continuously from 21:47:26.161730 through 21:47:26.172777: connect() failed with errno = 111 (ECONNREFUSED), followed by a sock connection error and "qpair failed and we were unable to recover it.", alternating across tqpair values 0x2196190, 0x7fc088000b90, and 0x7fc090000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:35.412 [2024-07-15 21:47:26.172850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.172875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.172944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.172966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.173037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.173129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.173231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 
00:25:35.412 [2024-07-15 21:47:26.173321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.173487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.173575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.173670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.173758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 
00:25:35.412 [2024-07-15 21:47:26.173854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.173948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.173970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.174039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.174151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.174255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 
00:25:35.412 [2024-07-15 21:47:26.174368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.174457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.174545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.174639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.174730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 
00:25:35.412 [2024-07-15 21:47:26.174819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.174912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.174935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.175024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.175069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.175135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.175166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.412 [2024-07-15 21:47:26.175233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.175255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 
00:25:35.412 [2024-07-15 21:47:26.175335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.412 [2024-07-15 21:47:26.175359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.412 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.175432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.175455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.175530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.175553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.175623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.175645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.175717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.175741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 
00:25:35.413 [2024-07-15 21:47:26.175807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.175829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.175900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.175923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.175989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.176085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.176184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 
00:25:35.413 [2024-07-15 21:47:26.176282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.176375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.176468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.176604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.176695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 
00:25:35.413 [2024-07-15 21:47:26.176792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.176883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.176908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.176988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.177087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.177189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 
00:25:35.413 [2024-07-15 21:47:26.177294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.177390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.177487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.177586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.177683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 
00:25:35.413 [2024-07-15 21:47:26.177779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.177871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.177893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.177984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.178117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.178273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 
00:25:35.413 [2024-07-15 21:47:26.178388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.178484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.178576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.178667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.178762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 
00:25:35.413 [2024-07-15 21:47:26.178878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.178909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.178993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.179015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.179104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.179135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.179232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.179254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.413 [2024-07-15 21:47:26.179351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.179373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 
00:25:35.413 [2024-07-15 21:47:26.179439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.413 [2024-07-15 21:47:26.179461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.413 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.179626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.179690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.179789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.179824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.179932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.179974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.180051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 
00:25:35.414 [2024-07-15 21:47:26.180154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.180254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.180343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.180447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.180540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 
00:25:35.414 [2024-07-15 21:47:26.180640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.180732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.180829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.180918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.180940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 00:25:35.414 [2024-07-15 21:47:26.181009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.414 [2024-07-15 21:47:26.181035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.414 qpair failed and we were unable to recover it. 
00:25:35.698 [2024-07-15 21:47:26.184063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.698 [2024-07-15 21:47:26.184096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.698 qpair failed and we were unable to recover it. 
00:25:35.699 [2024-07-15 21:47:26.185036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.185134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.185231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.185325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.185427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 
00:25:35.699 [2024-07-15 21:47:26.185519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.185614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.185711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.185809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.185897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.185920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 
00:25:35.699 [2024-07-15 21:47:26.185986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.186075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.186177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.186274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.186368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 
00:25:35.699 [2024-07-15 21:47:26.186461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.186559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.186648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.186741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.186828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 
00:25:35.699 [2024-07-15 21:47:26.186921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.186944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.187015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.187039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.187106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.187128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.187237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.187273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.187376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.187413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 
00:25:35.699 [2024-07-15 21:47:26.187505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.187542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.187637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.187666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.187778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.187800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.187896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.187944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.188072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.188096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 
00:25:35.699 [2024-07-15 21:47:26.188189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.188223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.188322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.188351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.188434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.188457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.699 qpair failed and we were unable to recover it. 00:25:35.699 [2024-07-15 21:47:26.188529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.699 [2024-07-15 21:47:26.188550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.188620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.188642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 
00:25:35.700 [2024-07-15 21:47:26.188715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.188740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.188810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.188833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.188906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.188929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.189003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.189094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 
00:25:35.700 [2024-07-15 21:47:26.189213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.189360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.189508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.189650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.189758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 
00:25:35.700 [2024-07-15 21:47:26.189848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.189937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.189958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.190026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.190164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.190266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 
00:25:35.700 [2024-07-15 21:47:26.190355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.190447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.190570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.190660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.190759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 
00:25:35.700 [2024-07-15 21:47:26.190852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.190941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.190964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.191033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.191055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.191124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.191150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 00:25:35.700 [2024-07-15 21:47:26.191232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.700 [2024-07-15 21:47:26.191255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.700 qpair failed and we were unable to recover it. 
00:25:35.700 [2024-07-15 21:47:26.191328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.191351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.191422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.191444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.191519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.191541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.191617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.191643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.191712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.191734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.191814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.191837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.191905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.191927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.191999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.192022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.192095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.192118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.192193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.192215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.192289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.192311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.192376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.192399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.192468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.192491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.700 [2024-07-15 21:47:26.192578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.700 [2024-07-15 21:47:26.192619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.700 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.192687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.192708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.192793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.192823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.192910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.192933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.193953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.193975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.194045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.194069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.194150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.194174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.194266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.194296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.194391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.194420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.194511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.194565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.194683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.194707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.194775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.194799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.194889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.194918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.195953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.195977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.196052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.196075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.196161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.196200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.196297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.196351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.196453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.196498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.196615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.196667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.196782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.196806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.196892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.196924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.197008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.197031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.701 [2024-07-15 21:47:26.197099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.701 [2024-07-15 21:47:26.197120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.701 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.197230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.197269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.197350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.197379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.197460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.197482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.197548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.197571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.197638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.197660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.197733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.197764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.197833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.197856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.197937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.197962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.198944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.198967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.199910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.199932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.200011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.200036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.200115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.200146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.200239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.200267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.200376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.200425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.200530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.702 [2024-07-15 21:47:26.200588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.702 qpair failed and we were unable to recover it.
00:25:35.702 [2024-07-15 21:47:26.200693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.703 [2024-07-15 21:47:26.200741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.703 qpair failed and we were unable to recover it.
00:25:35.703 [2024-07-15 21:47:26.200863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.703 [2024-07-15 21:47:26.200886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.703 qpair failed and we were unable to recover it.
00:25:35.703 [2024-07-15 21:47:26.200953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.703 [2024-07-15 21:47:26.200975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.703 qpair failed and we were unable to recover it.
00:25:35.703 [2024-07-15 21:47:26.201044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.703 [2024-07-15 21:47:26.201067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.703 qpair failed and we were unable to recover it.
00:25:35.703 [2024-07-15 21:47:26.201136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.703 [2024-07-15 21:47:26.201164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.703 qpair failed and we were unable to recover it.
00:25:35.703 [2024-07-15 21:47:26.201233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.703 [2024-07-15 21:47:26.201255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.703 qpair failed and we were unable to recover it.
00:25:35.703 [2024-07-15 21:47:26.201336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.703 [2024-07-15 21:47:26.201364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.703 qpair failed and we were unable to recover it.
00:25:35.703 [2024-07-15 21:47:26.201452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.201477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.201553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.201575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.201652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.201676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.201743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.201765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.201835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.201858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 
00:25:35.703 [2024-07-15 21:47:26.201923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.201945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 
00:25:35.703 [2024-07-15 21:47:26.202391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 
00:25:35.703 [2024-07-15 21:47:26.202838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.202931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.202953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 
00:25:35.703 [2024-07-15 21:47:26.203312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 
00:25:35.703 [2024-07-15 21:47:26.203776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.203958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.203981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.204053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.204077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.204152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.204174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 
00:25:35.703 [2024-07-15 21:47:26.204249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.204287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.204379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.204403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.204490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.204517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.204619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.703 [2024-07-15 21:47:26.204647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.703 qpair failed and we were unable to recover it. 00:25:35.703 [2024-07-15 21:47:26.204734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.204757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.204825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.204847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.204914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.204936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.205293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.205769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.205955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.205977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.206239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.206701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.206973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.206995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.207081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.207203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.207296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.207387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.207475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.207568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.207664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.207752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.207845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.207941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.207963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.208035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.208058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.208130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.208163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.208259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.208297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.208379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.208407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.208501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.208525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 00:25:35.704 [2024-07-15 21:47:26.208597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.704 [2024-07-15 21:47:26.208620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.704 qpair failed and we were unable to recover it. 
00:25:35.704 [2024-07-15 21:47:26.208691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.208713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 00:25:35.705 [2024-07-15 21:47:26.208786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.208809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 00:25:35.705 [2024-07-15 21:47:26.208893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.208919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 00:25:35.705 [2024-07-15 21:47:26.209010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.209033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 00:25:35.705 [2024-07-15 21:47:26.209102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.209125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 
00:25:35.705 [2024-07-15 21:47:26.209199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.209222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 00:25:35.705 [2024-07-15 21:47:26.209296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.209318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 00:25:35.705 [2024-07-15 21:47:26.209392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.209414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 00:25:35.705 [2024-07-15 21:47:26.209485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.209506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 00:25:35.705 [2024-07-15 21:47:26.209572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.209596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it. 
00:25:35.705 [2024-07-15 21:47:26.209677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.705 [2024-07-15 21:47:26.209701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.705 qpair failed and we were unable to recover it.
00:25:35.708 (message pair above repeated ~115 times between 21:47:26.209677 and 21:47:26.222490, cycling through tqpair values 0x7fc090000b90, 0x7fc088000b90, 0x7fc080000b90, and 0x2196190; every attempt failed with errno = 111 and the qpair could not be recovered)
00:25:35.708 [2024-07-15 21:47:26.222560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.222582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.222658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.222682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.222753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.222778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.222853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.222876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.222952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.222975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 
00:25:35.708 [2024-07-15 21:47:26.223043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.223143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.223240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.223337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.223431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 
00:25:35.708 [2024-07-15 21:47:26.223529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.223620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.223730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.223836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.223858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.223948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.224002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 
00:25:35.708 [2024-07-15 21:47:26.224223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.224248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.224320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.224343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.224415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.224438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.224509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.224532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.224607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.224631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 
00:25:35.708 [2024-07-15 21:47:26.224700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.224727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.224812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.224863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.224976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.225029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.225128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.225178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.225289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.225331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 
00:25:35.708 [2024-07-15 21:47:26.225444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.225471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.225557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.225581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.225671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.225699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.225800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.225827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.225912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.225935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 
00:25:35.708 [2024-07-15 21:47:26.226002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.226024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.226104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.226132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.708 qpair failed and we were unable to recover it. 00:25:35.708 [2024-07-15 21:47:26.226331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.708 [2024-07-15 21:47:26.226358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.226439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.226465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.226549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.226576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 
00:25:35.709 [2024-07-15 21:47:26.226652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.226677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.226754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.226815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.226933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.226956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.227042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.227154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 
00:25:35.709 [2024-07-15 21:47:26.227252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.227344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.227431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.227526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.227622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 
00:25:35.709 [2024-07-15 21:47:26.227725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.227832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.227933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.227959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.228032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.228122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 
00:25:35.709 [2024-07-15 21:47:26.228222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.228312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.228407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.228510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.228612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 
00:25:35.709 [2024-07-15 21:47:26.228704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.228807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.228910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.228935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.229012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.229034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.229111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.229135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 
00:25:35.709 [2024-07-15 21:47:26.229235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.229286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.229393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.229445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.229552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.229576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.229666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.229692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.229794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.229819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 
00:25:35.709 [2024-07-15 21:47:26.229919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.229944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.230034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.230059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.230159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.230207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.230461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.230526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.230636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.230662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 
00:25:35.709 [2024-07-15 21:47:26.230739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.230765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.230852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.230879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.709 [2024-07-15 21:47:26.230971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.709 [2024-07-15 21:47:26.230994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.709 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.231087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.231152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.231238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.231269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 
00:25:35.710 [2024-07-15 21:47:26.231374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.231400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.231478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.231500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.231566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.231588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.231658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.231683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.231752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.231776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 
00:25:35.710 [2024-07-15 21:47:26.231855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.231880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.231969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.232014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.232145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.232169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.232299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.232337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.232420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.232480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 
00:25:35.710 [2024-07-15 21:47:26.232578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.232634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.232752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.232798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.232901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.232951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.233089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.233153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.233260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.233286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 
00:25:35.710 [2024-07-15 21:47:26.233429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.233454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.233526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.233548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.233640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.233668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.233752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.233775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.233845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.233867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 
00:25:35.710 [2024-07-15 21:47:26.233950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.233976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.234072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.234164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.234252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.234352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 
00:25:35.710 [2024-07-15 21:47:26.234450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.234541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.234644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.234734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.234827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 
00:25:35.710 [2024-07-15 21:47:26.234925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.234948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.235016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.235038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.235112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.235134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.235283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.235306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.235392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.235418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 
00:25:35.710 [2024-07-15 21:47:26.235514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.235540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.235635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.235659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.235734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.710 [2024-07-15 21:47:26.235758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.710 qpair failed and we were unable to recover it. 00:25:35.710 [2024-07-15 21:47:26.235834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.235858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.235934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.235960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 
00:25:35.711 [2024-07-15 21:47:26.236032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.236120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.236240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.236349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.236459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 
00:25:35.711 [2024-07-15 21:47:26.236578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.236697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.236793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.236887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.236911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.236998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 
00:25:35.711 [2024-07-15 21:47:26.237109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.237212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.237304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.237398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.237497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 
00:25:35.711 [2024-07-15 21:47:26.237593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.237680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.237776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.237881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.237906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.237981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 
00:25:35.711 [2024-07-15 21:47:26.238075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.238189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.238352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.238535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.238643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 
00:25:35.711 [2024-07-15 21:47:26.238775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.238880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.238973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.238996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.239096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.239159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.239231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.239253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 
00:25:35.711 [2024-07-15 21:47:26.239322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.239346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.711 [2024-07-15 21:47:26.239424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.711 [2024-07-15 21:47:26.239449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.711 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.239524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.239548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.239624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.239648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.239717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.239740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 
00:25:35.712 [2024-07-15 21:47:26.239808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.239831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.239916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.239964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.240078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.240104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.240206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.240244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.240321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.240375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 
00:25:35.712 [2024-07-15 21:47:26.240503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.240526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.240597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.240619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.240692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.240715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.240817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.240870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.240960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.241014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 
00:25:35.712 [2024-07-15 21:47:26.241135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.241166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.241250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.241276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.241360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.241382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.241464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.241489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.241587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.241645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 
00:25:35.712 [2024-07-15 21:47:26.241787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.241841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.241918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.241941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.242012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.242035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.242102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.242128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 00:25:35.712 [2024-07-15 21:47:26.242224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.712 [2024-07-15 21:47:26.242252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.712 qpair failed and we were unable to recover it. 
00:25:35.712 [2024-07-15 21:47:26.242337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.242380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.242505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.242529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.242602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.242626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.242698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.242723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.242791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.242814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.242885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.242908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.242982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.243923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.243946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.244022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.244044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.712 qpair failed and we were unable to recover it.
00:25:35.712 [2024-07-15 21:47:26.244134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.712 [2024-07-15 21:47:26.244166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.244268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.244295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.244380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.244402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.244475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.244497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.244562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.244584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.244651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.244673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.244742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.244764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.244831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.244852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.244926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.244951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.245024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.245047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.245121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.245152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.245224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.245247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.245321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.245346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.245413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.245436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.245521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.245547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.245774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.245836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.245953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.245980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.246094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.246212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.246336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.246446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.246582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.246724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.246818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.246908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.246997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.247120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.247249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.247362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.247489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.247614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.247745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.247867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.247972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.247995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.248077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.248105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.248217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.248242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.248315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.248338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.248422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.248448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.248528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.248550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.248635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.248663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.248760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.248818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.713 [2024-07-15 21:47:26.248943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.713 [2024-07-15 21:47:26.248994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.713 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.249924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.249997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.250944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.250967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.251964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.251990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.252928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.714 [2024-07-15 21:47:26.252999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.714 [2024-07-15 21:47:26.253021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.714 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.253092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.253115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.253193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.253215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.253306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.253332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.253439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.253488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.253609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.253633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.253718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.253768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.253894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.253920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.254005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.254032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.254127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.254154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.254355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.254378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.254452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.254475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.254546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.254568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.254647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.254673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.254772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.254797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.254906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.715 [2024-07-15 21:47:26.254931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.715 qpair failed and we were unable to recover it.
00:25:35.715 [2024-07-15 21:47:26.255022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.255145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.255258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.255371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.255480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 
00:25:35.715 [2024-07-15 21:47:26.255586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.255674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.255761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.255857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.255947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.255969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 
00:25:35.715 [2024-07-15 21:47:26.256044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.256145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.256243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.256339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.256439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 
00:25:35.715 [2024-07-15 21:47:26.256540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.256652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.256768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.256883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.715 [2024-07-15 21:47:26.256908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.715 qpair failed and we were unable to recover it. 00:25:35.715 [2024-07-15 21:47:26.257005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.257135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.257305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.257432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.257536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.257631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.257724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.257817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.257916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.257940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.258007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.258029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.258105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.258129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.258211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.258242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.258344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.258369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.258450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.258497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.258612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.258668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.258766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.258831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.258953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.258977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.259049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.259149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.259245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.259332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.259424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.259531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.259651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.259752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.259852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.259951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.259973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.260408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.260891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.260913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.260979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.261001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.261075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.261102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.261190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.261215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 00:25:35.716 [2024-07-15 21:47:26.261301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.716 [2024-07-15 21:47:26.261326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.716 qpair failed and we were unable to recover it. 
00:25:35.716 [2024-07-15 21:47:26.261411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.261433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.261500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.261523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.261588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.261610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.261680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.261703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.261774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.261798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 
00:25:35.717 [2024-07-15 21:47:26.261879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.261904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.261975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.261998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.262063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.262156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.262255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 
00:25:35.717 [2024-07-15 21:47:26.262362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.262470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.262585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.262745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.262836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 
00:25:35.717 [2024-07-15 21:47:26.262938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.262964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.263049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.263145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.263243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.263361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 
00:25:35.717 [2024-07-15 21:47:26.263463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.263571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.263685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.263796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.263909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.263938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 
00:25:35.717 [2024-07-15 21:47:26.264013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.264037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.264134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.264205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.264285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.264333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.264449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.264474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.264568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.264623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 
00:25:35.717 [2024-07-15 21:47:26.264712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.264737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.264840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.264864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.264951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.264975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.265045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.265067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.265152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.265176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 
00:25:35.717 [2024-07-15 21:47:26.265248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.265270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.265339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.265361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.265434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.265456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.265531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.265554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.265628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.265652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 
00:25:35.717 [2024-07-15 21:47:26.265723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.717 [2024-07-15 21:47:26.265745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.717 qpair failed and we were unable to recover it. 00:25:35.717 [2024-07-15 21:47:26.265814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.265836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.265905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.265927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.265998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.266088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 
00:25:35.718 [2024-07-15 21:47:26.266181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.266282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.266375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.266584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.266684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 
00:25:35.718 [2024-07-15 21:47:26.266800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.266901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.266925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.267008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.267167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.267271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 
00:25:35.718 [2024-07-15 21:47:26.267363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.267452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.267552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.267655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.267746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 
00:25:35.718 [2024-07-15 21:47:26.267840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.267936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.267959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.268079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.268102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.268177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.268200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.268277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.268303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 
00:25:35.718 [2024-07-15 21:47:26.268384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.268408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.268480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.268502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.268577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.268599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.268721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.268779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.268889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.268917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 
00:25:35.718 [2024-07-15 21:47:26.268995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.269020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.269096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.269121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.269209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.269244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.269317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.269342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.269416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.269440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 
00:25:35.718 [2024-07-15 21:47:26.269533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.269571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.269660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.269699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.269783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.269840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.269955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.270012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.270080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.270102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 
00:25:35.718 [2024-07-15 21:47:26.270191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.270216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.270311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.718 [2024-07-15 21:47:26.270335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.718 qpair failed and we were unable to recover it. 00:25:35.718 [2024-07-15 21:47:26.270423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.270446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.270566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.270588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.270659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.270683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 
00:25:35.719 [2024-07-15 21:47:26.270753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.270775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.270894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.270973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.271044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.271066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.271136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.271170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.271257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.271282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 
00:25:35.719 [2024-07-15 21:47:26.271381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.271405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.271507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.271535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.271644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.271672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.271762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.271839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.271955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.272026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 
00:25:35.719 [2024-07-15 21:47:26.272136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.272169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.272309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.272357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.272485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.272509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.272590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.272615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.272714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.272739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 
00:25:35.719 [2024-07-15 21:47:26.272826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.272878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.272989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.273044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.273148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.273175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.273398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.273460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.273586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.273622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 
00:25:35.719 [2024-07-15 21:47:26.273713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.273781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.273882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.273949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.274057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.274107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.274226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.274292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 00:25:35.719 [2024-07-15 21:47:26.274398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.274424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it. 
00:25:35.719 [2024-07-15 21:47:26.274500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.719 [2024-07-15 21:47:26.274553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.719 qpair failed and we were unable to recover it.
[... the same pair of errors — posix.c:1038:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error and "qpair failed and we were unable to recover it." — repeats continuously from 21:47:26.274500 through 21:47:26.289255 for tqpair handles 0x2196190, 0x7fc080000b90, 0x7fc088000b90, and 0x7fc090000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:35.722 [2024-07-15 21:47:26.289346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.289377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.289458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.289480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.289550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.289572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.289642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.289667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.289738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.289760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 
00:25:35.722 [2024-07-15 21:47:26.289833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.289855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.289927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.289950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.290020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.290042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.290115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.290145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.290216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.290240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 
00:25:35.722 [2024-07-15 21:47:26.290307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.290330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.290396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.290418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.290488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.290510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.290586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.290609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 00:25:35.722 [2024-07-15 21:47:26.290721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.722 [2024-07-15 21:47:26.290747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.722 qpair failed and we were unable to recover it. 
00:25:35.722 [2024-07-15 21:47:26.290825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.290851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.290925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.290951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.291037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.291156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.291266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 
00:25:35.723 [2024-07-15 21:47:26.291366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.291472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.291586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.291684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.291782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 
00:25:35.723 [2024-07-15 21:47:26.291874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.291963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.291986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.292055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.292159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.292254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 
00:25:35.723 [2024-07-15 21:47:26.292344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.292436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.292531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.292626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.292730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 
00:25:35.723 [2024-07-15 21:47:26.292825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.292917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.292939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.293003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.293025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.293101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.293126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.293211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.293235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 
00:25:35.723 [2024-07-15 21:47:26.293424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.293448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.293539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.293565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.293757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.293785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.293873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.293899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.293991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 
00:25:35.723 [2024-07-15 21:47:26.294086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.294209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.294329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.294454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.294576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 
00:25:35.723 [2024-07-15 21:47:26.294690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.294779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.294870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.294964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.294987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.295056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.295082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 
00:25:35.723 [2024-07-15 21:47:26.295148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.295171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.723 qpair failed and we were unable to recover it. 00:25:35.723 [2024-07-15 21:47:26.295234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.723 [2024-07-15 21:47:26.295257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.295333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.295355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.295431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.295457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.295534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.295558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 
00:25:35.724 [2024-07-15 21:47:26.295632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.295654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.295728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.295765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.295847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.295882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.295973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.296094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 
00:25:35.724 [2024-07-15 21:47:26.296228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.296336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.296447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.296564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.296677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 
00:25:35.724 [2024-07-15 21:47:26.296775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.296881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.296906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.297010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.297172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.297269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 
00:25:35.724 [2024-07-15 21:47:26.297358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.297450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.297538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.297628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.297751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 
00:25:35.724 [2024-07-15 21:47:26.297845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.297943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.297968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.298044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.298066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.298160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.298189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.298395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.298465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 
00:25:35.724 [2024-07-15 21:47:26.298643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.298702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.298776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.298800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.298881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.298946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.299038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.299065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.299159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.299182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 
00:25:35.724 [2024-07-15 21:47:26.299255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.299277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.299355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.724 [2024-07-15 21:47:26.299381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.724 qpair failed and we were unable to recover it. 00:25:35.724 [2024-07-15 21:47:26.299481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.299506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.299593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.299617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.299690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.299715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 
00:25:35.725 [2024-07-15 21:47:26.299796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.299821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.299888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.299911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.299991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.300083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.300203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 
00:25:35.725 [2024-07-15 21:47:26.300298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.300390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.300488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.300577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.300672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 
00:25:35.725 [2024-07-15 21:47:26.300770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.300865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.300955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.300978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.301058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.301084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.301166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.301190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 
00:25:35.725 [2024-07-15 21:47:26.301275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.301300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.301400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.301425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.301520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.301545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.301643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.301668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.301759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.301781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 
00:25:35.725 [2024-07-15 21:47:26.301853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.301875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.301996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.302097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.302211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.302307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 
00:25:35.725 [2024-07-15 21:47:26.302401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.302498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.302620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.302743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.302871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.302898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 
00:25:35.725 [2024-07-15 21:47:26.302986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.303012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.303093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.303165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.303262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.303328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.303449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.303473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.303560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.303586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 
00:25:35.725 [2024-07-15 21:47:26.303684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.303709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.303789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.303811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.725 [2024-07-15 21:47:26.303896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.725 [2024-07-15 21:47:26.303922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.725 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.304002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.304099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 
00:25:35.726 [2024-07-15 21:47:26.304209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.304311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.304412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.304509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.304605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 
00:25:35.726 [2024-07-15 21:47:26.304717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.304946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.304974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.305058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.305114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.305229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.305282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.305397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.305458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 
00:25:35.726 [2024-07-15 21:47:26.305569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.305635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.305742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.305809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.305943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.306003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.306105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.306178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.306286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.306313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 
00:25:35.726 [2024-07-15 21:47:26.306403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.306453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.306553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.306600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.306717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.306741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.306819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.306842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.306908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.306930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 
00:25:35.726 [2024-07-15 21:47:26.307015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.307128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.307238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.307338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.307434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 
00:25:35.726 [2024-07-15 21:47:26.307532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.307625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.307725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.307813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.307904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.307927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 
00:25:35.726 [2024-07-15 21:47:26.308002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.308100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.308218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.308331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.308440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 
00:25:35.726 [2024-07-15 21:47:26.308545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.308648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.308751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.726 [2024-07-15 21:47:26.308855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.726 [2024-07-15 21:47:26.308881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.726 qpair failed and we were unable to recover it. 00:25:35.727 [2024-07-15 21:47:26.308969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.727 [2024-07-15 21:47:26.308996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.727 qpair failed and we were unable to recover it. 
00:25:35.727 [2024-07-15 21:47:26.309084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.727 [2024-07-15 21:47:26.309108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.727 qpair failed and we were unable to recover it. 00:25:35.727 [2024-07-15 21:47:26.309196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.727 [2024-07-15 21:47:26.309223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.727 qpair failed and we were unable to recover it. 00:25:35.727 [2024-07-15 21:47:26.309328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.727 [2024-07-15 21:47:26.309357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.727 qpair failed and we were unable to recover it. 00:25:35.727 [2024-07-15 21:47:26.309447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.727 [2024-07-15 21:47:26.309497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.727 qpair failed and we were unable to recover it. 00:25:35.727 [2024-07-15 21:47:26.309621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.727 [2024-07-15 21:47:26.309670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.727 qpair failed and we were unable to recover it. 
00:25:35.727 [2024-07-15 21:47:26.309770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.309796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.309870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.309924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.310940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.310964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.311943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.311965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.312920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.312942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.313061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.313083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.313151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.313198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.313326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.313350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.727 qpair failed and we were unable to recover it.
00:25:35.727 [2024-07-15 21:47:26.313423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.727 [2024-07-15 21:47:26.313446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.313545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.313600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.313711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.313762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.313880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.313904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.313976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.313999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.314085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.314153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.314279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.314302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.314394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.314434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.314504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.314526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.314607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.314633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.314732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.314758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.314844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.314869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.315015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.315051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.315123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.315159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.315253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.315280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.315382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.315408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.315505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.315530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.315614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.315638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.315728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.315757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.315835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.315900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.316011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.316085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.316197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.316223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.316307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.316376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.316481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.316548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.316665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.316688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.316761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.316784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.316873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.316901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.316987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.317010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.317096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.317148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.317272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.317319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.317421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.317468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.317571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.317626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.317744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.317769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.317841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.317863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.317935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.728 [2024-07-15 21:47:26.317956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.728 qpair failed and we were unable to recover it.
00:25:35.728 [2024-07-15 21:47:26.318034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.318970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.318993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.319913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.319986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.320010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.320088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.320110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.320193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.320216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.320295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.320318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.320391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.320414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.320485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.729 [2024-07-15 21:47:26.320512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.729 qpair failed and we were unable to recover it.
00:25:35.729 [2024-07-15 21:47:26.320588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.320611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.320686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.320710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.320777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.320800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.320878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.320903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.320976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.320999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 
00:25:35.729 [2024-07-15 21:47:26.321087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.321113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.321219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.321244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.321344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.321373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.321456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.321478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.321552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.321574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 
00:25:35.729 [2024-07-15 21:47:26.321648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.321673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.321794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.321817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.321891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.321914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.322072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.322129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 00:25:35.729 [2024-07-15 21:47:26.322255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.729 [2024-07-15 21:47:26.322280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.729 qpair failed and we were unable to recover it. 
00:25:35.729 [2024-07-15 21:47:26.322348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.322371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.322459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.322485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.322584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.322609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.322743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.322766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.322837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.322859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 
00:25:35.730 [2024-07-15 21:47:26.322945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.322970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.323060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.323083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.323157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.323182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.323274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.323303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.323386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.323412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 
00:25:35.730 [2024-07-15 21:47:26.323506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.323528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.323615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.323651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.323722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.323760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.323847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.323907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.324012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 
00:25:35.730 [2024-07-15 21:47:26.324151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.324271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.324389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.324557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.324673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 
00:25:35.730 [2024-07-15 21:47:26.324770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.324868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.324971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.324993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.325065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.325088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.325184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.325210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 
00:25:35.730 [2024-07-15 21:47:26.325412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.325447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.325532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.325557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.325640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.325662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.325730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.325752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.325824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.325847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 
00:25:35.730 [2024-07-15 21:47:26.325925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.325948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.326130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.326237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.326334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.326430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 
00:25:35.730 [2024-07-15 21:47:26.326527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.326622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.326723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.326814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.326909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.326933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 
00:25:35.730 [2024-07-15 21:47:26.327000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.730 [2024-07-15 21:47:26.327023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.730 qpair failed and we were unable to recover it. 00:25:35.730 [2024-07-15 21:47:26.327098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.327123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.327209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.327232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.327356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.327409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.327512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.327539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 
00:25:35.731 [2024-07-15 21:47:26.327622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.327644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.327715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.327738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.327810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.327833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.327900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.327922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.327993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.328015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 
00:25:35.731 [2024-07-15 21:47:26.328143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a4170 is same with the state(5) to be set 00:25:35.731 [2024-07-15 21:47:26.328241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.328272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.328372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.328399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.328484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.328509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.328598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.328625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.328700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.328725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 
00:25:35.731 [2024-07-15 21:47:26.328807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.328834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.328914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.328973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.329082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.329108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.329190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.329217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.329309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.329334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 
00:25:35.731 [2024-07-15 21:47:26.329432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.329461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.329558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.329584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.329681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.329706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.329805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.329830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.329928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.329953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 
00:25:35.731 [2024-07-15 21:47:26.330034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.330056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.330120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.330150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.330224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.330246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.330430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.330452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 00:25:35.731 [2024-07-15 21:47:26.330525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.330549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it. 
00:25:35.731 [2024-07-15 21:47:26.330619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.731 [2024-07-15 21:47:26.330641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.731 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed pair repeats continuously through 21:47:26.343680, alternating between tqpair=0x2196190, tqpair=0x7fc080000b90, and tqpair=0x7fc090000b90, all targeting addr=10.0.0.2, port=4420 with errno = 111 ...]
00:25:35.734 [2024-07-15 21:47:26.343748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.343771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.343837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.343858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.343931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.343953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.344019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.344114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 
00:25:35.734 [2024-07-15 21:47:26.344240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.344353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.344450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.344542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.344655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 
00:25:35.734 [2024-07-15 21:47:26.344771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.344869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.344965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.734 [2024-07-15 21:47:26.344988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.734 qpair failed and we were unable to recover it. 00:25:35.734 [2024-07-15 21:47:26.345055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.345149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 
00:25:35.735 [2024-07-15 21:47:26.345246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.345336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.345436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.345526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.345731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 
00:25:35.735 [2024-07-15 21:47:26.345827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.345914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.345940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 
00:25:35.735 [2024-07-15 21:47:26.346302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 
00:25:35.735 [2024-07-15 21:47:26.346776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.346964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.346986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.347054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.347076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.347162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.347186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 
00:25:35.735 [2024-07-15 21:47:26.347375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.347411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.347499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.347527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.347610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.347636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.347715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.347761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.347869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.347895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 
00:25:35.735 [2024-07-15 21:47:26.347973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.348028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.348127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.348165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.348267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.348291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.348385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.348446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.348548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.348592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 
00:25:35.735 [2024-07-15 21:47:26.348719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.348774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.348864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.348889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.348993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.349018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.349101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.349127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.349233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.349262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 
00:25:35.735 [2024-07-15 21:47:26.349357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.349383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.349464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.349487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.349556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.349578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.349651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.735 [2024-07-15 21:47:26.349674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.735 qpair failed and we were unable to recover it. 00:25:35.735 [2024-07-15 21:47:26.349743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.349765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 
00:25:35.736 [2024-07-15 21:47:26.349838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.349861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.349929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.349951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.350026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.350119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.350229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 
00:25:35.736 [2024-07-15 21:47:26.350319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.350436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.350547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.350635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.350735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 
00:25:35.736 [2024-07-15 21:47:26.350830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.350930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.350955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.351028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.351121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.351240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 
00:25:35.736 [2024-07-15 21:47:26.351339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.351436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.351533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.351640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.351756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 
00:25:35.736 [2024-07-15 21:47:26.351920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.351959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.352035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.352123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.352225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.352323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 
00:25:35.736 [2024-07-15 21:47:26.352413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.352507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.352600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.352699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.352790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.352814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 
00:25:35.736 [2024-07-15 21:47:26.353000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.353022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.353094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.353117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.353215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.353240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.353335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.353360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.353448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.353470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 
00:25:35.736 [2024-07-15 21:47:26.353551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.353609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.353678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.353701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.353766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.736 [2024-07-15 21:47:26.353788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.736 qpair failed and we were unable to recover it. 00:25:35.736 [2024-07-15 21:47:26.353970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.353992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.354060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 
00:25:35.737 [2024-07-15 21:47:26.354152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.354247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.354342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.354439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.354553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 
00:25:35.737 [2024-07-15 21:47:26.354655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.354784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.354908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.354962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.355059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.355111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.355225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.355250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 
00:25:35.737 [2024-07-15 21:47:26.355327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.355382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.355485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.355546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.355647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.355673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.355749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.355784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.355875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.355920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 
00:25:35.737 [2024-07-15 21:47:26.356024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.356151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.356258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.356364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.356473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 
00:25:35.737 [2024-07-15 21:47:26.356580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.356697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.356790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.356878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.356973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.356995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 
00:25:35.737 [2024-07-15 21:47:26.357066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.357171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.357275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.357381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.357497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 
00:25:35.737 [2024-07-15 21:47:26.357585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.357682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.357772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.357862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 00:25:35.737 [2024-07-15 21:47:26.357957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.737 [2024-07-15 21:47:26.357979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.737 qpair failed and we were unable to recover it. 
00:25:35.737 [2024-07-15 21:47:26.358047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.358165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.358266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.358362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.358472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 
00:25:35.738 [2024-07-15 21:47:26.358589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.358692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.358783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.358874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.358966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.358988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 
00:25:35.738 [2024-07-15 21:47:26.359063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.359086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.359169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.359195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.359266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.359292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.359394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.359417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.359482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.359504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 
00:25:35.738 [2024-07-15 21:47:26.359572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.359596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.359672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.359696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.359884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.359909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.359978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.360067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 
00:25:35.738 [2024-07-15 21:47:26.360160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.360254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.360341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.360453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.360566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 
00:25:35.738 [2024-07-15 21:47:26.360664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.360765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.360854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.360949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.360973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.361041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 
00:25:35.738 [2024-07-15 21:47:26.361146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.361240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.361335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.361426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.361524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 
00:25:35.738 [2024-07-15 21:47:26.361645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.361756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.361860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.361970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.361992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.362189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.362227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 
00:25:35.738 [2024-07-15 21:47:26.362296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.362319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.362384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.362406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.738 [2024-07-15 21:47:26.362479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.738 [2024-07-15 21:47:26.362502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.738 qpair failed and we were unable to recover it. 00:25:35.739 [2024-07-15 21:47:26.362569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.739 [2024-07-15 21:47:26.362591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.739 qpair failed and we were unable to recover it. 00:25:35.739 [2024-07-15 21:47:26.362661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.739 [2024-07-15 21:47:26.362685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.739 qpair failed and we were unable to recover it. 
00:25:35.739 [2024-07-15 21:47:26.362763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.739 [2024-07-15 21:47:26.362787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.739 qpair failed and we were unable to recover it. 00:25:35.739 [2024-07-15 21:47:26.362888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.739 [2024-07-15 21:47:26.362912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.739 qpair failed and we were unable to recover it. 00:25:35.739 [2024-07-15 21:47:26.362989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.739 [2024-07-15 21:47:26.363013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.739 qpair failed and we were unable to recover it. 00:25:35.739 [2024-07-15 21:47:26.363080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.739 [2024-07-15 21:47:26.363103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.739 qpair failed and we were unable to recover it. 00:25:35.739 [2024-07-15 21:47:26.363183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.739 [2024-07-15 21:47:26.363207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.739 qpair failed and we were unable to recover it. 
00:25:35.739 [2024-07-15 21:47:26.363278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.363300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.363390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.363417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.363494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.363519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.363601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.363627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.363705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.363732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.363805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.363830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.363902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.363927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.364928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.364953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.365034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.365059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.365177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.365225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.365315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.365342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.365429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.365466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.365552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.365578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.365657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.365681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.365762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.365794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.365897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.365935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.366925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.366949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.367028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.367053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.367195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.367233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.367335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.367369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.367455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.367481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.739 qpair failed and we were unable to recover it.
00:25:35.739 [2024-07-15 21:47:26.367564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.739 [2024-07-15 21:47:26.367587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.367674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.367699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.367797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.367822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.367919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.367952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.368963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.368993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.369077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.369102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.369197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.369223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.369297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.369356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.369472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.369523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.369643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.369665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.369814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.369846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.369940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.369967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.370052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.370113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.370230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.370290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.370391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.370417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.370503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.370544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.370643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.370669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.370738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.370760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.370827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.370850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.370940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.370964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.371936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.371959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.372041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.372071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.372154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.372193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.372279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.372304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.372401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.372429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.372509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.372535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.372619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.372673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.740 [2024-07-15 21:47:26.372775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.740 [2024-07-15 21:47:26.372811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.740 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.372902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.372937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.373028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.373095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.373209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.373252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.373370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.373431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.373564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.373635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.373744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.373789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.373916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.373965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.374071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.374124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.374232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.374273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.374419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.374457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.374550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.374578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.374660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.374685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.374754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.374777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.374844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.741 [2024-07-15 21:47:26.374866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.741 qpair failed and we were unable to recover it.
00:25:35.741 [2024-07-15 21:47:26.374933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.374955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 00:25:35.741 [2024-07-15 21:47:26.375042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.375100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 00:25:35.741 [2024-07-15 21:47:26.375213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.375249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 00:25:35.741 [2024-07-15 21:47:26.375339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.375399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 00:25:35.741 [2024-07-15 21:47:26.375508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.375543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 
00:25:35.741 [2024-07-15 21:47:26.375637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.375659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 00:25:35.741 [2024-07-15 21:47:26.375733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.375757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 00:25:35.741 [2024-07-15 21:47:26.375875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.375928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 00:25:35.741 [2024-07-15 21:47:26.376027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.376056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 00:25:35.741 [2024-07-15 21:47:26.376149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.741 [2024-07-15 21:47:26.376192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.741 qpair failed and we were unable to recover it. 
00:25:35.741 [2024-07-15 21:47:26.376308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.376366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.376483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.376535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.376646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.376682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.376770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.376806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.376893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.376947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 
00:25:35.742 [2024-07-15 21:47:26.377062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.377122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.377237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.377293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.377390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.377448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.377545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.377588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.377719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.377743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 
00:25:35.742 [2024-07-15 21:47:26.377851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.377887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.378007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.378071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.378163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.378198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.378293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.378336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.378456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.378515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 
00:25:35.742 [2024-07-15 21:47:26.378618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.378664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.378767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.378822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.378946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.378996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.379092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.379160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.379267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.379323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 
00:25:35.742 [2024-07-15 21:47:26.379419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.379462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.379565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.379617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.379731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.379765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.379890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.379948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.380039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.380071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 
00:25:35.742 [2024-07-15 21:47:26.380164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.380234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.380332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.380375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.380503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.380526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.380609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.380667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.380779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.380836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 
00:25:35.742 [2024-07-15 21:47:26.380937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.380996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.381105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.381178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.742 qpair failed and we were unable to recover it. 00:25:35.742 [2024-07-15 21:47:26.381289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.742 [2024-07-15 21:47:26.381330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.381445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.381497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.381624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.381660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 
00:25:35.743 [2024-07-15 21:47:26.381750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.381782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.381871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.381903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.381993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.382042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.382166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.382193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.382276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.382303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 
00:25:35.743 [2024-07-15 21:47:26.382387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.382427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.382548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.382604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.382704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.382749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.382865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.382922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.383030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.383085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 
00:25:35.743 [2024-07-15 21:47:26.383214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.383259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.383392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.383427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.383529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.383599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.383710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.383773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.383878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.383930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 
00:25:35.743 [2024-07-15 21:47:26.384044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.384102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.384217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.384275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.384380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.384446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.384545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.384592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.384709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.384735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 
00:25:35.743 [2024-07-15 21:47:26.384814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.384864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.384987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.385103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.385295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.385398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 
00:25:35.743 [2024-07-15 21:47:26.385504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.385609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.385716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.385837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.385960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.385987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 
00:25:35.743 [2024-07-15 21:47:26.386067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.386091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.386168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.386201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.386291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.386323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.386413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.386452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.743 [2024-07-15 21:47:26.386528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.386552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 
00:25:35.743 [2024-07-15 21:47:26.386637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.743 [2024-07-15 21:47:26.386698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.743 qpair failed and we were unable to recover it. 00:25:35.744 [2024-07-15 21:47:26.386814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.744 [2024-07-15 21:47:26.386840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.744 qpair failed and we were unable to recover it. 00:25:35.744 [2024-07-15 21:47:26.386928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.744 [2024-07-15 21:47:26.386990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.744 qpair failed and we were unable to recover it. 00:25:35.744 [2024-07-15 21:47:26.387106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.744 [2024-07-15 21:47:26.387131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.744 qpair failed and we were unable to recover it. 00:25:35.744 [2024-07-15 21:47:26.387223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.744 [2024-07-15 21:47:26.387246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.744 qpair failed and we were unable to recover it. 
00:25:35.744 [2024-07-15 21:47:26.387403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.744 [2024-07-15 21:47:26.387426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.744 qpair failed and we were unable to recover it.
00:25:35.744 [... the same connect()/qpair-failure triplet repeated over 100 times between 21:47:26.387 and 21:47:26.399, cycling through tqpair=0x2196190, 0x7fc080000b90, 0x7fc088000b90, and 0x7fc090000b90, all against addr=10.0.0.2, port=4420 ...]
00:25:35.747 [2024-07-15 21:47:26.400039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.400071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.400159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.400197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.400278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.400303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.400385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.400451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.400556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.400604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 
00:25:35.747 [2024-07-15 21:47:26.400718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.400743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.400827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.400868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.400980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.401152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.401260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 
00:25:35.747 [2024-07-15 21:47:26.401355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.401445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.401537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.401652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.401780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 
00:25:35.747 [2024-07-15 21:47:26.401904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.401928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.402022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.402046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.402119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.402147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.402225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.402248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.402330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.402352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 
00:25:35.747 [2024-07-15 21:47:26.402458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.402479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.402558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.402611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.747 qpair failed and we were unable to recover it. 00:25:35.747 [2024-07-15 21:47:26.402716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.747 [2024-07-15 21:47:26.402752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.402838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.402861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.402936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.402962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 
00:25:35.748 [2024-07-15 21:47:26.403186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.403226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.403311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.403334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.403431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.403454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.403536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.403570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.403657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.403695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 
00:25:35.748 [2024-07-15 21:47:26.403786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.403808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.403877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.403898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.403983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.404014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.404127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.404164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.404272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.404320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 
00:25:35.748 [2024-07-15 21:47:26.404428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.404450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.404525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.404579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.404683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.404753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.404859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.404883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.404953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.404974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 
00:25:35.748 [2024-07-15 21:47:26.405189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.405212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.405287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.405309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.405375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.405397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.405479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.405501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.405574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.405596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 
00:25:35.748 [2024-07-15 21:47:26.405678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.405703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.405778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.405811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.405896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.405919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.405992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.406085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 
00:25:35.748 [2024-07-15 21:47:26.406187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.406276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.406373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.406463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.406551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 
00:25:35.748 [2024-07-15 21:47:26.406643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.406736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.406833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.406927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.406948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.407026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.407051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 
00:25:35.748 [2024-07-15 21:47:26.407119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.407147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.407222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.407245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.407336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.407360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.748 [2024-07-15 21:47:26.407434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.748 [2024-07-15 21:47:26.407456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.748 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.407528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.407550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 
00:25:35.749 [2024-07-15 21:47:26.407618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.407640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.407705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.407757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.407851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.407873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.407944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.407966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.408038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.408061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 
00:25:35.749 [2024-07-15 21:47:26.408149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.408174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.408242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.408264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.408341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.408364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.408430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.408452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 00:25:35.749 [2024-07-15 21:47:26.408527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.749 [2024-07-15 21:47:26.408552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.749 qpair failed and we were unable to recover it. 
00:25:35.749 [2024-07-15 21:47:26.408626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.749 [2024-07-15 21:47:26.408649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.749 qpair failed and we were unable to recover it.
[... the same three-line failure sequence — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 21:47:26.408715 through 21:47:26.420764 (harness time 00:25:35.749–00:25:35.752), for tqpair addresses 0x7fc088000b90, 0x2196190, 0x7fc090000b90, and 0x7fc080000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:35.752 [2024-07-15 21:47:26.420836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.420859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.420953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.420975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.421054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.421157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.421255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 
00:25:35.752 [2024-07-15 21:47:26.421348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.421446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.421541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.421643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.421739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 
00:25:35.752 [2024-07-15 21:47:26.421834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.421927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.421948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.422015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.422119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.422233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 
00:25:35.752 [2024-07-15 21:47:26.422326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.422421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.422544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.422653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.422758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 
00:25:35.752 [2024-07-15 21:47:26.422857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.422946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.422968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.423049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.423073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.423155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.423181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.423255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.423276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 
00:25:35.752 [2024-07-15 21:47:26.423345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.423367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.423441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.423462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.423535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.423556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.423631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.423654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 00:25:35.752 [2024-07-15 21:47:26.423728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.752 [2024-07-15 21:47:26.423754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.752 qpair failed and we were unable to recover it. 
00:25:35.752 [2024-07-15 21:47:26.423827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.423849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.423926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.423949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.424027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.424149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.424243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 
00:25:35.753 [2024-07-15 21:47:26.424347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.424475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.424572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.424683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.424777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 
00:25:35.753 [2024-07-15 21:47:26.424873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.424897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.424975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.425073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.425180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.425271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 
00:25:35.753 [2024-07-15 21:47:26.425369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.425464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.425574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.425674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.425772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 
00:25:35.753 [2024-07-15 21:47:26.425869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.425969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.425994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.426066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.426157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.426275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 
00:25:35.753 [2024-07-15 21:47:26.426380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.426470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.426558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.426653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.426756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 
00:25:35.753 [2024-07-15 21:47:26.426850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.426955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.426981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.427056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.427148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.427251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 
00:25:35.753 [2024-07-15 21:47:26.427347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.427440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.427533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.427623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.427721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 
00:25:35.753 [2024-07-15 21:47:26.427810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.753 qpair failed and we were unable to recover it. 00:25:35.753 [2024-07-15 21:47:26.427897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.753 [2024-07-15 21:47:26.427918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 00:25:35.754 [2024-07-15 21:47:26.427998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.754 [2024-07-15 21:47:26.428021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 00:25:35.754 [2024-07-15 21:47:26.428094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.754 [2024-07-15 21:47:26.428117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 00:25:35.754 [2024-07-15 21:47:26.428215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.754 [2024-07-15 21:47:26.428242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 
00:25:35.754 [2024-07-15 21:47:26.428321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.754 [2024-07-15 21:47:26.428343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 00:25:35.754 [2024-07-15 21:47:26.428418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.754 [2024-07-15 21:47:26.428440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 00:25:35.754 [2024-07-15 21:47:26.428517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.754 [2024-07-15 21:47:26.428538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 00:25:35.754 [2024-07-15 21:47:26.428609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.754 [2024-07-15 21:47:26.428634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 00:25:35.754 [2024-07-15 21:47:26.428726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.754 [2024-07-15 21:47:26.428757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.754 qpair failed and we were unable to recover it. 
00:25:35.757 [2024-07-15 21:47:26.439966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.439987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.440063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.440167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.440260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.440356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 
00:25:35.757 [2024-07-15 21:47:26.440451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.440545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.440648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.440752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.440859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.440884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 
00:25:35.757 [2024-07-15 21:47:26.440960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.441014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.441128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.441163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.441237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.441264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.441344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.441391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.441496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.441547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 
00:25:35.757 [2024-07-15 21:47:26.441651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.441693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.441800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.441822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.441898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.441921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.441996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.442098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 
00:25:35.757 [2024-07-15 21:47:26.442202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.442302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.442393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.442599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.442690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 
00:25:35.757 [2024-07-15 21:47:26.442788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.442887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.442909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.442983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.443081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.443187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 
00:25:35.757 [2024-07-15 21:47:26.443277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.443485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.443583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.443683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.443772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 
00:25:35.757 [2024-07-15 21:47:26.443866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.443964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.443988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.444060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.757 [2024-07-15 21:47:26.444085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.757 qpair failed and we were unable to recover it. 00:25:35.757 [2024-07-15 21:47:26.444169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.444201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.444291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.444318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 
00:25:35.758 [2024-07-15 21:47:26.444401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.444424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.444499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.444522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.444599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.444623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.444694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.444717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.444794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.444817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 
00:25:35.758 [2024-07-15 21:47:26.444889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.444912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.444988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.445079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.445180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.445272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 
00:25:35.758 [2024-07-15 21:47:26.445366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.445463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.445558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.445666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.445789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 
00:25:35.758 [2024-07-15 21:47:26.445889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.445912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.445980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.446077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.446185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.446294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 
00:25:35.758 [2024-07-15 21:47:26.446384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.446481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.446583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.446688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.446793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 
00:25:35.758 [2024-07-15 21:47:26.446892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.446914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.446984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.447079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.447189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.447290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 
00:25:35.758 [2024-07-15 21:47:26.447382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.447472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.447566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.447659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.447753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 
00:25:35.758 [2024-07-15 21:47:26.447854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.447949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.447971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.448038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.448059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.448132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.448162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.758 qpair failed and we were unable to recover it. 00:25:35.758 [2024-07-15 21:47:26.448241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.758 [2024-07-15 21:47:26.448266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.759 qpair failed and we were unable to recover it. 
00:25:35.759 [2024-07-15 21:47:26.448357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.759 [2024-07-15 21:47:26.448382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.759 qpair failed and we were unable to recover it. 00:25:35.759 [2024-07-15 21:47:26.448461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.759 [2024-07-15 21:47:26.448484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.759 qpair failed and we were unable to recover it. 00:25:35.759 [2024-07-15 21:47:26.448554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.759 [2024-07-15 21:47:26.448575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.759 qpair failed and we were unable to recover it. 00:25:35.759 [2024-07-15 21:47:26.448647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.759 [2024-07-15 21:47:26.448668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.759 qpair failed and we were unable to recover it. 00:25:35.759 [2024-07-15 21:47:26.448735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.759 [2024-07-15 21:47:26.448758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.759 qpair failed and we were unable to recover it. 
00:25:35.759 [2024-07-15 21:47:26.448849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.448872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.448937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.448959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.449922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.449944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.450971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.450992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.451062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.451084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.451155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.451179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.451245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.451267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.451333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.451354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.451427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.451450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.759 [2024-07-15 21:47:26.451533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.759 [2024-07-15 21:47:26.451558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.759 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.451631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.451653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.451730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.451761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.451839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.451861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.451938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.451961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.452905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.452927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.453908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.453985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.454951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.454975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.455057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.455104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.455208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.455253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.455362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.455387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.455455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.455477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.455555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.455577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.455644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.455666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.455739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.455762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.760 [2024-07-15 21:47:26.455829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.760 [2024-07-15 21:47:26.455851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.760 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.455928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.455952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.456919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.456994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.457087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.457310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.457421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.457517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.457607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.457701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.457797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.457888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.457909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.458102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.458124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.458199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.458221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.458295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.458319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.458395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.458418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.458487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.458509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.458578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:35.761 [2024-07-15 21:47:26.458601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:35.761 qpair failed and we were unable to recover it.
00:25:35.761 [2024-07-15 21:47:26.458681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.458704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.458780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.458801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.458883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.458904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.458973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.458994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.459067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 
00:25:35.761 [2024-07-15 21:47:26.459183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.459281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.459382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.459477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.459573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 
00:25:35.761 [2024-07-15 21:47:26.459669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.459761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.459854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.459950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.459975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.460052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.460077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 
00:25:35.761 [2024-07-15 21:47:26.460159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.761 [2024-07-15 21:47:26.460197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.761 qpair failed and we were unable to recover it. 00:25:35.761 [2024-07-15 21:47:26.460296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.460320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.460397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.460419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.460491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.460512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.460585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.460607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 
00:25:35.762 [2024-07-15 21:47:26.460681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.460702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.460781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.460802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.460877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.460898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.460966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.460988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.461074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 
00:25:35.762 [2024-07-15 21:47:26.461193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.461319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.461424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.461523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.461628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 
00:25:35.762 [2024-07-15 21:47:26.461728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.461834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.461946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.461969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.462044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.462071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.462168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.462224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 
00:25:35.762 [2024-07-15 21:47:26.462332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.462379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.462476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.462499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.462574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.462596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.462676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.462700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.462779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.462801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 
00:25:35.762 [2024-07-15 21:47:26.462887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.462918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.462994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.463016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.463098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.463121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.463224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.463248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.463324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.463345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 
00:25:35.762 [2024-07-15 21:47:26.463472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.463511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.463612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.463634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.463707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.463749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.463860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.463902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.463999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.464022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 
00:25:35.762 [2024-07-15 21:47:26.464093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.464115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.464200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.464223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.464298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.464321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.464393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.464415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.464486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.464509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 
00:25:35.762 [2024-07-15 21:47:26.464587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.464609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:35.762 [2024-07-15 21:47:26.464682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.762 [2024-07-15 21:47:26.464704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:35.762 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.464781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.464803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.464892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.464913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.464996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 
00:25:36.052 [2024-07-15 21:47:26.465107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.465223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.465343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.465448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.465619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 
00:25:36.052 [2024-07-15 21:47:26.465720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.465828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.465923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.465947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.466024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.466124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 
00:25:36.052 [2024-07-15 21:47:26.466219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.466323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.466421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.466515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.466608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 
00:25:36.052 [2024-07-15 21:47:26.466710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.466808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.466919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.466944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.467028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.467158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 
00:25:36.052 [2024-07-15 21:47:26.467259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.467358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.467451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.467551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.467651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 
00:25:36.052 [2024-07-15 21:47:26.467753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.467846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.052 [2024-07-15 21:47:26.467868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.052 qpair failed and we were unable to recover it. 00:25:36.052 [2024-07-15 21:47:26.467946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.053 [2024-07-15 21:47:26.467971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.053 qpair failed and we were unable to recover it. 00:25:36.053 [2024-07-15 21:47:26.468049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.053 [2024-07-15 21:47:26.468071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.053 qpair failed and we were unable to recover it. 00:25:36.053 [2024-07-15 21:47:26.468154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.053 [2024-07-15 21:47:26.468177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.053 qpair failed and we were unable to recover it. 
00:25:36.053 [2024-07-15 21:47:26.468259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.053 [2024-07-15 21:47:26.468282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.053 qpair failed and we were unable to recover it. 00:25:36.053 [2024-07-15 21:47:26.468356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.053 [2024-07-15 21:47:26.468378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.053 qpair failed and we were unable to recover it. 00:25:36.053 [2024-07-15 21:47:26.468456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.053 [2024-07-15 21:47:26.468478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.053 qpair failed and we were unable to recover it. 00:25:36.053 [2024-07-15 21:47:26.468563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.053 [2024-07-15 21:47:26.468585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.053 qpair failed and we were unable to recover it. 00:25:36.053 [2024-07-15 21:47:26.468657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.053 [2024-07-15 21:47:26.468680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.053 qpair failed and we were unable to recover it. 
00:25:36.053 [2024-07-15 21:47:26.468760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.468790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.468865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.468888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.468959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.468987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.469067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.469091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.469175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.469199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.469386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.469411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.469486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.469512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.469597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.469620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.469694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.469717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.469809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.469845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.469990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.470967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.470992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.471067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.471090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.471169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.471216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.471429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.471459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.471550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.471572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.471647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.471668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.471747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.471775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.471852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.471874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.471960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.471984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.472064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.472087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.472164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.472199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.472290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.472322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.472427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.053 [2024-07-15 21:47:26.472476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.053 qpair failed and we were unable to recover it.
00:25:36.053 [2024-07-15 21:47:26.472596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.472645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.472766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.472821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.472923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.472969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.473068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.473105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.473230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.473256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.473330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.473352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.473539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.473574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.473661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.473683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.473763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.473788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.473911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.473935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.474970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.474992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.475929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.475952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.476915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.054 [2024-07-15 21:47:26.476989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.054 [2024-07-15 21:47:26.477013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.054 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.477926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.477952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.478829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.478855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.479043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.479067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.479151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.479175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.479359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.055 [2024-07-15 21:47:26.479383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.055 qpair failed and we were unable to recover it.
00:25:36.055 [2024-07-15 21:47:26.479455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.479477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.479544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.479566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.479642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.479665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.479735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.479758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.479835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.479857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 
00:25:36.055 [2024-07-15 21:47:26.479939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.479962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.480153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.480201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.480306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.480351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.480458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.480483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.480566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.480589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 
00:25:36.055 [2024-07-15 21:47:26.480658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.480680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.480759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.480782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.480855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.480878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.480955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.480979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.481050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.481071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 
00:25:36.055 [2024-07-15 21:47:26.481147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.481169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.481245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.481267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.481337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.481359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.481441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.055 [2024-07-15 21:47:26.481471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.055 qpair failed and we were unable to recover it. 00:25:36.055 [2024-07-15 21:47:26.481552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.481579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 
00:25:36.056 [2024-07-15 21:47:26.481668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.481692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.481766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.481787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.481855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.481876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.481946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.481969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.482048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 
00:25:36.056 [2024-07-15 21:47:26.482157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.482259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.482354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.482447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.482548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 
00:25:36.056 [2024-07-15 21:47:26.482640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.482744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.482847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.482943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.482967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.483047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 
00:25:36.056 [2024-07-15 21:47:26.483158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.483260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.483360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.483457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.483561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 
00:25:36.056 [2024-07-15 21:47:26.483667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.483771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.483868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.483967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.483989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.484061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.484101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 
00:25:36.056 [2024-07-15 21:47:26.484203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.484229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.484312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.484333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.484401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.484422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.484618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.484646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.484734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.484756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 
00:25:36.056 [2024-07-15 21:47:26.484827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.484848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.484916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.056 [2024-07-15 21:47:26.484938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.056 qpair failed and we were unable to recover it. 00:25:36.056 [2024-07-15 21:47:26.485030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.485145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.485236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 
00:25:36.057 [2024-07-15 21:47:26.485326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.485415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.485509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.485601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.485702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 
00:25:36.057 [2024-07-15 21:47:26.485792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.485889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.485910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.485984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.486079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.486180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 
00:25:36.057 [2024-07-15 21:47:26.486274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.486363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.486456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.486673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.486777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 
00:25:36.057 [2024-07-15 21:47:26.486889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.486942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.487013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.487113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.487212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.487313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 
00:25:36.057 [2024-07-15 21:47:26.487415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.487503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.487597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.487704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.487804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.487827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 
00:25:36.057 [2024-07-15 21:47:26.488022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.488055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.488147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.488200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.488296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.488339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.488435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.488482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.488579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.488630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 
00:25:36.057 [2024-07-15 21:47:26.488746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.488809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.488887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.488920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.489009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.489034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.489104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.489127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.489205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.489228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 
00:25:36.057 [2024-07-15 21:47:26.489321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.489354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.489437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.489459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.489530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.057 [2024-07-15 21:47:26.489555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.057 qpair failed and we were unable to recover it. 00:25:36.057 [2024-07-15 21:47:26.489630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.058 [2024-07-15 21:47:26.489653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.058 qpair failed and we were unable to recover it. 00:25:36.058 [2024-07-15 21:47:26.489732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.058 [2024-07-15 21:47:26.489755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.058 qpair failed and we were unable to recover it. 
00:25:36.058 [2024-07-15 21:47:26.489826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.489849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.489924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.489946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.490914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.490986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.491010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.491084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.491116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.491206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.491249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.491349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.491387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.491484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.491569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.491680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.491714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.491799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.491827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.491918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.491946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.492028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.492051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.492116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.492152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.492231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.492263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.492350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.492405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.492500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.492550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.492651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.492704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.492796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.492845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.492941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.492987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.493083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.493121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.493225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.493251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.493330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.493381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.493471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.493513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.493601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.493638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.493734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.493774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.493871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.493927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.494019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.494072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.494186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.494231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.494323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.494370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.494481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.058 [2024-07-15 21:47:26.494507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.058 qpair failed and we were unable to recover it.
00:25:36.058 [2024-07-15 21:47:26.494593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.494622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.494698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.494736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.494835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.494875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.494961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.494998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.495096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.495136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.495266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.495302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.495415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.495471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.495546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.495603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.495709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.495750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.495833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.495880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.496114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.496152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.496248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.496289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.496373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.496411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.496519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.496560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.496652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.496693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.496791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.496819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.496934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.496983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.497065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.497117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.497228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.497303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.497403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.497446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.497549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.497590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.497823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.497868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.497984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.498012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.498144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.498185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.498293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.498336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.498443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.498495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.498599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.498652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.498746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.498794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.498893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.498937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.499036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.499059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.499130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.499187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.499299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.499342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.499438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.499477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.499575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.499614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.499713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.499752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.499839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.499876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.499971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.499993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.500070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.500093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.500181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.500204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.500280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.500302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.059 qpair failed and we were unable to recover it.
00:25:36.059 [2024-07-15 21:47:26.500378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.059 [2024-07-15 21:47:26.500400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.500471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.500493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.500568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.500590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.500668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.500692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.500764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.500787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.500867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.500889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.500968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.500990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.060 [2024-07-15 21:47:26.501875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.060 qpair failed and we were unable to recover it.
00:25:36.060 [2024-07-15 21:47:26.501950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.501973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.502050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.502155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.502248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.502341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 
00:25:36.060 [2024-07-15 21:47:26.502436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.502523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.502620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.502719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.502814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 
00:25:36.060 [2024-07-15 21:47:26.502913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.502951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.503052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.503088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.503189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.503228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.503327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.503369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.503475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.503527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 
00:25:36.060 [2024-07-15 21:47:26.503621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.503659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.503772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.503822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.503917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.503955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.504066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.504103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.060 qpair failed and we were unable to recover it. 00:25:36.060 [2024-07-15 21:47:26.504207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.060 [2024-07-15 21:47:26.504244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.504344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.504385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.504480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.504517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.504615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.504654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.504748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.504771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.504842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.504883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.504987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.505013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.505095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.505120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.505212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.505266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.505361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.505410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.505530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.505567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.505677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.505726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.505798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.505822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.505894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.505917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.505988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.506086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.506214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.506340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.506456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.506560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.506656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.506760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.506860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.506955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.506982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.507062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.507085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.507167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.507208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.507310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.507332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.507418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.507443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.507516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.507539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.507615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.507643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.507721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.507761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.507869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.507893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.507981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.508029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.508128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.508186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.508293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.508336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.508437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.508485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.508582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.508628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.508730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.508775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.508888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.508925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.509077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.509124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 00:25:36.061 [2024-07-15 21:47:26.509229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.061 [2024-07-15 21:47:26.509276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.061 qpair failed and we were unable to recover it. 
00:25:36.061 [2024-07-15 21:47:26.509367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.509411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.509513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.509563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.509657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.509706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.509819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.509849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.509980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.510023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 
00:25:36.062 [2024-07-15 21:47:26.510111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.510155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.510256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.510299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.510397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.510438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.510537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.510580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.510679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.510723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 
00:25:36.062 [2024-07-15 21:47:26.510810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.510852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.510950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.510992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.511091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.511149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.511261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.511310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.511413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.511462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 
00:25:36.062 [2024-07-15 21:47:26.511560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.511600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.511699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.511740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.511833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.511879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.511985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.512029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.512132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.512169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 
00:25:36.062 [2024-07-15 21:47:26.512251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.512283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.512378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.512410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.512504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.512540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.512637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.512678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.512776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.512812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 
00:25:36.062 [2024-07-15 21:47:26.512919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.512941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.513017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.513040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.513111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.513133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.513225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.513249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.513325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.513365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 
00:25:36.062 [2024-07-15 21:47:26.513461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.513503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.513595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.513635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.513738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.513760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.513828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.513877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.513973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.514019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 
00:25:36.062 [2024-07-15 21:47:26.514135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.514187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.514290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.514337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.514431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.514470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.514561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.514583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.514660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.514682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 
00:25:36.062 [2024-07-15 21:47:26.514754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.062 [2024-07-15 21:47:26.514776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.062 qpair failed and we were unable to recover it. 00:25:36.062 [2024-07-15 21:47:26.514854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.514892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.514987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.515088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.515189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 
00:25:36.063 [2024-07-15 21:47:26.515290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.515387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.515486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.515585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.515733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 
00:25:36.063 [2024-07-15 21:47:26.515839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.515935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.515958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.516030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.516053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.516145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.516183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.516282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.516312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 
00:25:36.063 [2024-07-15 21:47:26.516389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.516433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.516533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.516574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.516691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.516732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.516828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.516869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.516955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.516977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 
00:25:36.063 [2024-07-15 21:47:26.517047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.517087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.517194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.517217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.517292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.517333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.517575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.517630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.517844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.517896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 
00:25:36.063 [2024-07-15 21:47:26.517994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.518094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.518203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.518322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.518441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 
00:25:36.063 [2024-07-15 21:47:26.518580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.518715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.518813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.518908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.518951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.519075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.519111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 
00:25:36.063 [2024-07-15 21:47:26.519202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.519225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.519313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.519342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.519413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.519457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.519549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.519601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.519707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.519756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 
00:25:36.063 [2024-07-15 21:47:26.519852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.519892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.519994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.063 [2024-07-15 21:47:26.520040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.063 qpair failed and we were unable to recover it. 00:25:36.063 [2024-07-15 21:47:26.520134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.520179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.520286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.520308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.520380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.520402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 
00:25:36.064 [2024-07-15 21:47:26.520474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.520510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.520607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.520646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.520745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.520787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.520878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.520920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.521013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.521066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 
00:25:36.064 [2024-07-15 21:47:26.521167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.521215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.521310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.521353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.521454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.521496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.521597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.521646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.521737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.521781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 
00:25:36.064 [2024-07-15 21:47:26.521878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.521921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.522014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.522037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.522111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.522166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.522260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.522302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.522398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.522437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 
00:25:36.064 [2024-07-15 21:47:26.522530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.522573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.522671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.522715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.522804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.522845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.522949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.522997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.523106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.523136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 
00:25:36.064 [2024-07-15 21:47:26.523285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.523327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.523418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.523441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.523513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.523555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.523657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.523694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.523806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.523850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 
00:25:36.064 [2024-07-15 21:47:26.523950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.523988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.524082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.524121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.524226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.524267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.524370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.524393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.524457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.524479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 
00:25:36.064 [2024-07-15 21:47:26.524549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.524592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.524706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.524738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.524832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.524876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.524972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.525010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.525135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.525199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 
00:25:36.064 [2024-07-15 21:47:26.525316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.525369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.525452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.525476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.525546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.525569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.064 qpair failed and we were unable to recover it. 00:25:36.064 [2024-07-15 21:47:26.525640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.064 [2024-07-15 21:47:26.525663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.065 qpair failed and we were unable to recover it. 00:25:36.065 [2024-07-15 21:47:26.525735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.065 [2024-07-15 21:47:26.525767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.065 qpair failed and we were unable to recover it. 
00:25:36.065 [2024-07-15 21:47:26.525865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.065 [2024-07-15 21:47:26.525899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.065 qpair failed and we were unable to recover it. 00:25:36.065 [2024-07-15 21:47:26.525997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.065 [2024-07-15 21:47:26.526039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.065 qpair failed and we were unable to recover it. 00:25:36.065 [2024-07-15 21:47:26.526132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.065 [2024-07-15 21:47:26.526168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.065 qpair failed and we were unable to recover it. 00:25:36.065 [2024-07-15 21:47:26.526254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.065 [2024-07-15 21:47:26.526278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.065 qpair failed and we were unable to recover it. 00:25:36.065 [2024-07-15 21:47:26.526355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.065 [2024-07-15 21:47:26.526381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.065 qpair failed and we were unable to recover it. 
00:25:36.065 [2024-07-15 21:47:26.526470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.526513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.526607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.526648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.526743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.526765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.526837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.526873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.526978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.527004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.527081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.527113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.527218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.527243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.527321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.527363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.527454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.527498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.527593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.527631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.527723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.527761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.527861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.527902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.527999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.528039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.528144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.528181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.528288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.528331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.528418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.528457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.528557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.528580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.528656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.528694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.528795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.528835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.528931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.528974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.529071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.529099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.529182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.529209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.529286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.529319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.529405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.529450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.529541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.529582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.529694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.529733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.529830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.529877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.065 [2024-07-15 21:47:26.529969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.065 [2024-07-15 21:47:26.529992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.065 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.530063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.530100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.530219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.530263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.530355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.530378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.530456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.530493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.530581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.530620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.530721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.530756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.530855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.530901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.531012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.531048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.531153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.531203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.531299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.531340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.531437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.531477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.531579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.531627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.531725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.531764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.531860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.531902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.531998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.532038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.532127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.532160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.532228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.532266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.532377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.532417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.532516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.532555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.532649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.532689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.532794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.532822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.532905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.532938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.533057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.533109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.533226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.533260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.533357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.533400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.533497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.533535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.533636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.533658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.533732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.533754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.533829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.533852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.533922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.533944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.534913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.534946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.066 [2024-07-15 21:47:26.535021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.066 [2024-07-15 21:47:26.535047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.066 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.535122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.535157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.535239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.535263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.535334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.535357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.535434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.535458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.535540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.535587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.535684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.535728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.535831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.535872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.535975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.536011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.536113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.536175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.536297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.536339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.536434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.536471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.536567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.536611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.536710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.536750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.536850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.536879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.536973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.537005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.537096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.537154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.537271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.537309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.537446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.537494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.537583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.537623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.537728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.537768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.537871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.537909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.538006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.538047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.538158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.538199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.538298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.538338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.538435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.538473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.538572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.538615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.538709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.067 [2024-07-15 21:47:26.538746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.067 qpair failed and we were unable to recover it.
00:25:36.067 [2024-07-15 21:47:26.538842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.538881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 00:25:36.067 [2024-07-15 21:47:26.538980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.539002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 00:25:36.067 [2024-07-15 21:47:26.539081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.539109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 00:25:36.067 [2024-07-15 21:47:26.539199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.539233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 00:25:36.067 [2024-07-15 21:47:26.539324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.539369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 
00:25:36.067 [2024-07-15 21:47:26.539463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.539503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 00:25:36.067 [2024-07-15 21:47:26.539620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.539652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 00:25:36.067 [2024-07-15 21:47:26.539752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.539774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 00:25:36.067 [2024-07-15 21:47:26.539851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.067 [2024-07-15 21:47:26.539895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.067 qpair failed and we were unable to recover it. 00:25:36.067 [2024-07-15 21:47:26.539997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.540035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 
00:25:36.068 [2024-07-15 21:47:26.540141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.540177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.540293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.540346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.540436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.540477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.540581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.540604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.540679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.540728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 
00:25:36.068 [2024-07-15 21:47:26.540823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.540863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.540962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.541003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.541099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.541149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.541243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.541284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.541376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.541420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 
00:25:36.068 [2024-07-15 21:47:26.541515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.541563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.541674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.541709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.541806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.541862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.541963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.542009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.542102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.542128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 
00:25:36.068 [2024-07-15 21:47:26.542214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.542256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.542383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.542423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.542510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.542553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.542654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.542694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.542804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.542843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 
00:25:36.068 [2024-07-15 21:47:26.542939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.542962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.543038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.543073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.543182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.543221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.543318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.543358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.543452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.543492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 
00:25:36.068 [2024-07-15 21:47:26.543589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.543612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.543685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.543708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.543787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.543810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.543898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.543924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 00:25:36.068 [2024-07-15 21:47:26.544013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.068 [2024-07-15 21:47:26.544037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.068 qpair failed and we were unable to recover it. 
00:25:36.068 [2024-07-15 21:47:26.544107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.544159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.544250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.544275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.544353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.544397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.544526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.544559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.544656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.544684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 
00:25:36.069 [2024-07-15 21:47:26.544766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.544811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.544902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.544943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.545058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.545093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.545198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.545242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.545340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.545393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 
00:25:36.069 [2024-07-15 21:47:26.545487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.545527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.545631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.545660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.545760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.545816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.545917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.545952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.546041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.546084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 
00:25:36.069 [2024-07-15 21:47:26.546198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.546239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.546336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.546358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.546425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.546467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.546570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.546614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.546705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.546740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 
00:25:36.069 [2024-07-15 21:47:26.546836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.546897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.546995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.547038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.547136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.547168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.547243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.547276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.547494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.547541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 
00:25:36.069 [2024-07-15 21:47:26.547630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.547683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.547781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.547817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.547913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.547935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.548003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.548027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.548106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.548129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 
00:25:36.069 [2024-07-15 21:47:26.548224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.548249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.548332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.548356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.548437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.548481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.069 [2024-07-15 21:47:26.548570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.069 [2024-07-15 21:47:26.548611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.069 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.548735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.548769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 
00:25:36.070 [2024-07-15 21:47:26.548869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.548912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.549004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.549050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.549185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.549208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.549314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.549336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.549431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.549467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 
00:25:36.070 [2024-07-15 21:47:26.549587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.549611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.549739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.549786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.549884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.549922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.550029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.550053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 00:25:36.070 [2024-07-15 21:47:26.550128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.070 [2024-07-15 21:47:26.550164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.070 qpair failed and we were unable to recover it. 
00:25:36.070 [2024-07-15 21:47:26.550247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.550283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.550377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.550421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.550522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.550564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.550658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.550698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.550794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.550818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.550915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.550939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.551021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.551068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.551177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.551201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.551283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.551325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.551437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.551471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.551560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.551599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.551696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.551735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.551834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.551877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.551970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.552013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.552112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.552162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.552264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.552308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.552405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.552451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.552562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.552598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.552689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.552728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.552820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.552860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.552961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.553004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.553101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.553151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.553247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.553293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.553389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.553432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.553533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.553562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.553658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.553691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.553779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.553821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.553934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.553957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.554050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.554087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.070 [2024-07-15 21:47:26.554192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.070 [2024-07-15 21:47:26.554216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.070 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.554303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.554349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.554450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.554493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.554588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.554632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.554756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.554788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.554883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.554924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.555018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.555059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.555153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.555190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.555272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.555310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.555414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.555458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.555547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.555586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.555685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.555725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.555828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.555868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.555962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.556002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.556100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.556124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.556218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.556265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.556374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.556416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.556507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.556554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.556651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.556691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.556791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.556815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.556911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.556956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.557069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.557102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.557208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.557232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.557308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.557334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.557431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.557465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.557557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.557598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.557718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.557757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.557861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.557899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.558004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.558051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.558162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.558210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.558309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.558348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.558482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.071 [2024-07-15 21:47:26.558516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.071 qpair failed and we were unable to recover it.
00:25:36.071 [2024-07-15 21:47:26.558610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.558638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.558725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.558761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.558873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.558907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.559006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.559049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.559160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.559203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.559301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.559341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.559444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.559486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.559599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.559641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.559747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.559794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.559891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.559934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.560046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.560095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.560217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.560247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.560354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.560388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.560486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.560524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.560625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.560669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.560765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.560805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.560906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.560952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.561045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.561084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.561204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.561238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.561350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.561387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.561497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.561537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.561640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.561687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.561805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.561853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.561958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.562001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.562103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.562157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.562274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.562321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.562428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.562468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.562587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.562614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.562711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.562743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.562843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.562889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.563005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.563053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.563166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.563209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.563314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.563341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.563434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.563476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.563597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.563633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.563739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.563784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.563887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.563934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.564040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.564080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.564227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.564264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.564382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.564428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.072 [2024-07-15 21:47:26.564551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.072 [2024-07-15 21:47:26.564594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.072 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.564740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.564774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.564890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.564925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.565028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.565074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.565182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.565222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.565329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.565368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.565471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.565515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.565628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.565674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.565781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.565828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.565925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.565969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.566071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.566113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.566231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.566255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.566349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.566390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.566498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.073 [2024-07-15 21:47:26.566525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.073 qpair failed and we were unable to recover it.
00:25:36.073 [2024-07-15 21:47:26.566611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.566653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.566760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.566804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.566903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.566949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.567070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.567125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.567251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.567296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 
00:25:36.073 [2024-07-15 21:47:26.567406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.567451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.567568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.567617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.567716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.567741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.567872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.567906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.568005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.568047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 
00:25:36.073 [2024-07-15 21:47:26.568159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.568202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.568305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.568355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.568460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.568503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.568600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.568643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.568738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.568764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 
00:25:36.073 [2024-07-15 21:47:26.568853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.568900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.569007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.569053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.569163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.569189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.569270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.569296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.569391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.569431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 
00:25:36.073 [2024-07-15 21:47:26.569538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.569582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.073 [2024-07-15 21:47:26.569687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.073 [2024-07-15 21:47:26.569730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.073 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.569844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.569888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.569981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.570007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.570089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.570131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 
00:25:36.074 [2024-07-15 21:47:26.570260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.570286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.570368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.570411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.570511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.570554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.570656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.570701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.570801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.570827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 
00:25:36.074 [2024-07-15 21:47:26.570911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.570953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.571063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.571106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.571224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.571267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.571375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.571416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.571524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.571567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 
00:25:36.074 [2024-07-15 21:47:26.571673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.571712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.571817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.571863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.572020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.572073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.572193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.572263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.572389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.572419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 
00:25:36.074 [2024-07-15 21:47:26.572505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.572535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.572647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.572682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.572806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.572852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.573000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.573035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.573130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.573167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 
00:25:36.074 [2024-07-15 21:47:26.573301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.573341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.573474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.573519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.573629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.573672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.573809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.573836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.573922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.573962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 
00:25:36.074 [2024-07-15 21:47:26.574082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.574119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.574243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.574281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.574371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.574418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.574524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.574570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 00:25:36.074 [2024-07-15 21:47:26.574672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.074 [2024-07-15 21:47:26.574716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.074 qpair failed and we were unable to recover it. 
00:25:36.075 [2024-07-15 21:47:26.574830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.574877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.574995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.575041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.575149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.575176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.575313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.575346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.575451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.575492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 
00:25:36.075 [2024-07-15 21:47:26.575650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.575683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.575789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.575835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.575938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.575965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.576056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.576094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.576203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.576232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 
00:25:36.075 [2024-07-15 21:47:26.576332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.576358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.576458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.576488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.576576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.576603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.576719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.576747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.576835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.576861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 
00:25:36.075 [2024-07-15 21:47:26.576947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.576973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.577079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.577121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.577265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.577297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.577387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.577414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 00:25:36.075 [2024-07-15 21:47:26.577505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.075 [2024-07-15 21:47:26.577531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.075 qpair failed and we were unable to recover it. 
00:25:36.075 [2024-07-15 21:47:26.577660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.075 [2024-07-15 21:47:26.577704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.075 qpair failed and we were unable to recover it.
00:25:36.075 [2024-07-15 21:47:26.577802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.075 [2024-07-15 21:47:26.577839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.075 qpair failed and we were unable to recover it.
00:25:36.075 [2024-07-15 21:47:26.577952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.075 [2024-07-15 21:47:26.577991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.075 qpair failed and we were unable to recover it.
[... the same three-line pattern (connect() failed, errno = 111 → sock connection error to addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats, with varying timestamps from 21:47:26.578 through 21:47:26.595 and tqpair handles alternating among 0x2196190, 0x7fc080000b90, and 0x7fc090000b90 ...]
00:25:36.078 [2024-07-15 21:47:26.595257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.595283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.595365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.595392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.595473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.595505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.595619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.595656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.595765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.595801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 
00:25:36.078 [2024-07-15 21:47:26.595895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.595924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.596014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.596041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.596117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.596151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.596238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.596269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.596358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.596385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 
00:25:36.078 [2024-07-15 21:47:26.596469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.078 [2024-07-15 21:47:26.596496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.078 qpair failed and we were unable to recover it. 00:25:36.078 [2024-07-15 21:47:26.596582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.596609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.596707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.596734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.596818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.596844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.596924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.596960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 
00:25:36.079 [2024-07-15 21:47:26.597056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.597098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.597228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.597275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.597394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.597441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.597555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.597599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.597726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.597761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 
00:25:36.079 [2024-07-15 21:47:26.597869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.597916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.598025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.598052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.598147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.598189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.598303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.598330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.598422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.598464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 
00:25:36.079 [2024-07-15 21:47:26.598573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.598616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.598734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.598780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.598888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.598930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.599037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.599070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.599155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.599201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 
00:25:36.079 [2024-07-15 21:47:26.599298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.599323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.599427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.599470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.599580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.599624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.599732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.599773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.599886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.599943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 
00:25:36.079 [2024-07-15 21:47:26.600046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.600077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.600161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.600189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.600284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.600332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.600438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.600481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.600590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.600633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 
00:25:36.079 [2024-07-15 21:47:26.600741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.600784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.600895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.600922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.601010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.601043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.601121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.601187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 00:25:36.079 [2024-07-15 21:47:26.601295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.079 [2024-07-15 21:47:26.601338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.079 qpair failed and we were unable to recover it. 
00:25:36.079 [2024-07-15 21:47:26.601461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.601489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.601586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.601622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.601725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.601774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.601875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.601901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.601991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.602039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 
00:25:36.080 [2024-07-15 21:47:26.602159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.602203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.602311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.602359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.602455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.602497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.602632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.602687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.602768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.602803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 
00:25:36.080 [2024-07-15 21:47:26.602885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.602919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.603032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.603071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.603189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.603215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.603307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.603367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.603465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.603513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 
00:25:36.080 [2024-07-15 21:47:26.603616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.603657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.603778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.603825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.603965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.604001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.604099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.604125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.604221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.604269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 
00:25:36.080 [2024-07-15 21:47:26.604369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.604417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.604522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.604548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.604640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.604684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.604785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.604831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.604933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.604991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 
00:25:36.080 [2024-07-15 21:47:26.605099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.605179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.605302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.605340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.605444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.605471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.605554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.605597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.605705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.605756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 
00:25:36.080 [2024-07-15 21:47:26.605864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.605913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it. 00:25:36.080 [2024-07-15 21:47:26.606238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.080 [2024-07-15 21:47:26.606269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.080 qpair failed and we were unable to recover it.
[... the two messages above repeated ~110 more times between 21:47:26.606 and 21:47:26.623 for tqpair values 0x2196190, 0x7fc080000b90, 0x7fc088000b90, and 0x7fc090000b90; every connect() attempt to 10.0.0.2 port 4420 failed with errno = 111 (ECONNREFUSED) and no qpair could be recovered ...]
00:25:36.084 [2024-07-15 21:47:26.623460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.623511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.623627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.623672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.623778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.623821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.623943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.623993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.624113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.624168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 
00:25:36.084 [2024-07-15 21:47:26.624293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.624325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.624452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.624480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.624571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.624598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.624685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.624714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.624807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.624852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 
00:25:36.084 [2024-07-15 21:47:26.624975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.625010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.625151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.625209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.625335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.625384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.625495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.625542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.625657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.625705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 
00:25:36.084 [2024-07-15 21:47:26.625805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.625850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.625959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.626008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.626132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.626185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.626304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.626340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.626455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.626502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 
00:25:36.084 [2024-07-15 21:47:26.626613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.626663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.626767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.626815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.626924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.626967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.627078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.627119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.627258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.627302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 
00:25:36.084 [2024-07-15 21:47:26.627408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.627455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.627558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.627605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.627705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.627750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.627857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.627905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.628029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.628066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 
00:25:36.084 [2024-07-15 21:47:26.628176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.628203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.628314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.084 [2024-07-15 21:47:26.628358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.084 qpair failed and we were unable to recover it. 00:25:36.084 [2024-07-15 21:47:26.628470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.628502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.628605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.628647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.628733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.628777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 
00:25:36.085 [2024-07-15 21:47:26.628893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.628937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.629072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.629113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.629243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.629301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.629424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.629462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.629580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.629623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 
00:25:36.085 [2024-07-15 21:47:26.629755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.629794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.629908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.629955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.630071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.630120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.630254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.630281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.630376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.630421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 
00:25:36.085 [2024-07-15 21:47:26.630538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.630582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.630722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.630760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.630867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.630913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.631015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.631063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.631188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.631228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 
00:25:36.085 [2024-07-15 21:47:26.631317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.631344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.631440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.631478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.631569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.631613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.631742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.631781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.631906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.631943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 
00:25:36.085 [2024-07-15 21:47:26.632044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.632085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.632209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.632256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.632364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.632412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.632539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.632576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.632694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.632740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 
00:25:36.085 [2024-07-15 21:47:26.632843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.632887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.632996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.633044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.633149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.633177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.085 [2024-07-15 21:47:26.633264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.085 [2024-07-15 21:47:26.633307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.085 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.633429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.633468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 
00:25:36.086 [2024-07-15 21:47:26.633563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.633610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.633721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.633749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.633840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.633887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.634000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.634051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.634165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.634215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 
00:25:36.086 [2024-07-15 21:47:26.634333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.634381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.634502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.634528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.634626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.634680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.634784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.634811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.634901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.634942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 
00:25:36.086 [2024-07-15 21:47:26.635063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.635113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.635246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.635288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.635416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.635463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.635567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.635611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.635717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.635744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 
00:25:36.086 [2024-07-15 21:47:26.635841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.635886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.635998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.636026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.636122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.636214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.636384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.636422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.636533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.636563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 
00:25:36.086 [2024-07-15 21:47:26.636669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.636709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.636817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.636853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.636963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.637008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.637128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.637189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.637317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.637353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 
00:25:36.086 [2024-07-15 21:47:26.637454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.637498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.637612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.637659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.637775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.637818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.637931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.637976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.638078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.638124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 
00:25:36.086 [2024-07-15 21:47:26.638302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.638338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.638443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.638494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.638593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.638642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.638802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.638839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.638960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.638997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 
00:25:36.086 [2024-07-15 21:47:26.639116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.639162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.639269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.639314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.639470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.086 [2024-07-15 21:47:26.639507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.086 qpair failed and we were unable to recover it. 00:25:36.086 [2024-07-15 21:47:26.639654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.639691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.639849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.639885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 
00:25:36.087 [2024-07-15 21:47:26.639995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.640023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.640126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.640189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.640312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.640357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.640529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.640567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.640695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.640740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 
00:25:36.087 [2024-07-15 21:47:26.640901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.640937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.641091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.641128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.641252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.641299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.641392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.641448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.641562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.641614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 
00:25:36.087 [2024-07-15 21:47:26.641727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.641779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.641888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.641937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.642044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.642095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.642209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.642258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.642389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.642425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 
00:25:36.087 [2024-07-15 21:47:26.642534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.642580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.642746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.642783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.642896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.642935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.643033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.643076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.643192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.643238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 
00:25:36.087 [2024-07-15 21:47:26.643359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.643407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.643512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.643558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.643674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.643723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.643825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.643869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.643993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.644029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 
00:25:36.087 [2024-07-15 21:47:26.644151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.644190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.644288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.644333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.644482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.644521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.644628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.644675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.644829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.644879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 
00:25:36.087 [2024-07-15 21:47:26.645032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.645072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.645204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.645246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.645347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.087 [2024-07-15 21:47:26.645437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.087 qpair failed and we were unable to recover it. 00:25:36.087 [2024-07-15 21:47:26.645562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.645609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.645727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.645774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 
00:25:36.088 [2024-07-15 21:47:26.645909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.645937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.646038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.646088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.646210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.646239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.646324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.646371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.646479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.646526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 
00:25:36.088 [2024-07-15 21:47:26.646636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.646686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.646786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.646830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.646948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.646994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.647102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.647158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.647259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.647306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 
00:25:36.088 [2024-07-15 21:47:26.647414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.647461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.647568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.647616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.647725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.647786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.647901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.647944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.648054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.648099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 
00:25:36.088 [2024-07-15 21:47:26.648231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.648282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.648400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.648446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.648571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.648627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.648741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.648772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.648870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.648930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 
00:25:36.088 [2024-07-15 21:47:26.649060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.649110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.649229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.649274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.649391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.649432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.088 [2024-07-15 21:47:26.649552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.088 [2024-07-15 21:47:26.649589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.088 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.649705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.649735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 
00:25:36.089 [2024-07-15 21:47:26.649841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.649887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.650008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.650037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.650124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.650188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.650330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.650370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.650481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.650525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 
00:25:36.089 [2024-07-15 21:47:26.650632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.650677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.650791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.650831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.650941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.650973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.651076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.651125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 00:25:36.089 [2024-07-15 21:47:26.651258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.089 [2024-07-15 21:47:26.651305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.089 qpair failed and we were unable to recover it. 
00:25:36.091 [2024-07-15 21:47:26.665168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.091 [2024-07-15 21:47:26.665233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.091 qpair failed and we were unable to recover it.
00:25:36.092 [2024-07-15 21:47:26.670148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.670197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.670336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.670377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.670508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.670548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.670664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.670714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.670844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.670885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 
00:25:36.092 [2024-07-15 21:47:26.670991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.671040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.671162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.671214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.671373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.671412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.671532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.671577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.671694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.671742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 
00:25:36.092 [2024-07-15 21:47:26.671923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.671966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.672119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.672180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.672300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.672346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.672459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.672489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.672593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.672629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 
00:25:36.092 [2024-07-15 21:47:26.672770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.672803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.672954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.672993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.673103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.673153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.673264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.673295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.673408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.673459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 
00:25:36.092 [2024-07-15 21:47:26.673581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.673630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.673751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.673800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.673953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.674002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.674155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.674197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.674347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.674388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 
00:25:36.092 [2024-07-15 21:47:26.674500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.674557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.674702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.674742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.674871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.674929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.092 [2024-07-15 21:47:26.675057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.092 [2024-07-15 21:47:26.675130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.092 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.675341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.675398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 
00:25:36.093 [2024-07-15 21:47:26.675553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.675604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.675719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.675769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.675912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.675951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.676057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.676107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.676239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.676295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 
00:25:36.093 [2024-07-15 21:47:26.676408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.676455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.676566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.676613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.676742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.676792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.676908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.676957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.677070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.677106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 
00:25:36.093 [2024-07-15 21:47:26.677224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.677273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.677397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.677457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.677576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.677622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.677731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.677777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.677892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.677942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 
00:25:36.093 [2024-07-15 21:47:26.678057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.678106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.678234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.678284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.678400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.678448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.678560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.678606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.678722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.678773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 
00:25:36.093 [2024-07-15 21:47:26.678884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.678929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.679035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.679082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.679205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.679256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.679367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.679397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.679490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.679543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 
00:25:36.093 [2024-07-15 21:47:26.679661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.679711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.679837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.679877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.680007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.680046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.680161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.680213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.680328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.680358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 
00:25:36.093 [2024-07-15 21:47:26.680448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.680499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.680613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.093 [2024-07-15 21:47:26.680662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.093 qpair failed and we were unable to recover it. 00:25:36.093 [2024-07-15 21:47:26.680767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.680819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.680943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.680981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.681102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.681154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 
00:25:36.094 [2024-07-15 21:47:26.681293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.681333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.681441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.681492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.681624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.681672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.681825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.681883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.682039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.682094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 
00:25:36.094 [2024-07-15 21:47:26.682240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.682281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.682400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.682445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.682554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.682604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.682716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.682768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 00:25:36.094 [2024-07-15 21:47:26.682894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.094 [2024-07-15 21:47:26.682932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.094 qpair failed and we were unable to recover it. 
00:25:36.094 [2024-07-15 21:47:26.683039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.094 [2024-07-15 21:47:26.683086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.094 qpair failed and we were unable to recover it.
00:25:36.095 [2024-07-15 21:47:26.693339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.095 [2024-07-15 21:47:26.693396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.095 qpair failed and we were unable to recover it.
00:25:36.095 [2024-07-15 21:47:26.694041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.095 [2024-07-15 21:47:26.694104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.095 qpair failed and we were unable to recover it.
00:25:36.096 [2024-07-15 21:47:26.695488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.096 [2024-07-15 21:47:26.695551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.096 qpair failed and we were unable to recover it.
00:25:36.097 [2024-07-15 21:47:26.703509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.703566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.703684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.703717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.703816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.703867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.703979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.704043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.704235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.704296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 
00:25:36.097 [2024-07-15 21:47:26.704452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.704512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.704703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.704757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.704931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.704973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.705088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.705134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.705319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.705365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 
00:25:36.097 [2024-07-15 21:47:26.705492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.705544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.705708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.705773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.705930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.706001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.706201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.706260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.706451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.706502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 
00:25:36.097 [2024-07-15 21:47:26.706628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.706682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.706816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.706850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.706959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.707017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.707161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.707192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.707364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.707425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 
00:25:36.097 [2024-07-15 21:47:26.707605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.707647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.707774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.707829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.707954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.708013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.708156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.708211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.708331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.708382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 
00:25:36.097 [2024-07-15 21:47:26.708571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.708614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.708792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.708835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.709016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.709069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.709216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.709272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.709436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.709475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 
00:25:36.097 [2024-07-15 21:47:26.709633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.709677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.709817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.709867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.710028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.710077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.710231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.710263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 00:25:36.097 [2024-07-15 21:47:26.710383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.097 [2024-07-15 21:47:26.710434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.097 qpair failed and we were unable to recover it. 
00:25:36.098 [2024-07-15 21:47:26.710572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.710613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.710735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.710784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.710916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.710970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.711118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.711184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.711297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.711346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 
00:25:36.098 [2024-07-15 21:47:26.711492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.711527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.711640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.711694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.711823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.711879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.712015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.712070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.712201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.712250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 
00:25:36.098 [2024-07-15 21:47:26.712383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.712437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.712583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.712637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.712757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.712809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.712921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.712975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.713125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.713194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 
00:25:36.098 [2024-07-15 21:47:26.713312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.713366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.713489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.713549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.713681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.713712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.713864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.713920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.714039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.714096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 
00:25:36.098 [2024-07-15 21:47:26.714263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.714332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.714470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.714527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.714658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.714712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.714827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.714878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.714992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.715041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 
00:25:36.098 [2024-07-15 21:47:26.715225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.715289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.715458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.715506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.715628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.715661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.715780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.715836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.715962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.716018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 
00:25:36.098 [2024-07-15 21:47:26.716157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.716241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.716381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.098 [2024-07-15 21:47:26.716414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.098 qpair failed and we were unable to recover it. 00:25:36.098 [2024-07-15 21:47:26.716534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.716565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.716683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.716740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.716900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.716943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 
00:25:36.099 [2024-07-15 21:47:26.717074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.717105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.717238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.717288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.717408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.717460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.717577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.717610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.717716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.717773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 
00:25:36.099 [2024-07-15 21:47:26.717928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.717972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.718087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.718150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.718278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.718330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.718463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.718497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.718598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.718656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 
00:25:36.099 [2024-07-15 21:47:26.722444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.722496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.722614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.722666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.722788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.722840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.722974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.723062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 00:25:36.099 [2024-07-15 21:47:26.723243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.099 [2024-07-15 21:47:26.723307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.099 qpair failed and we were unable to recover it. 
00:25:36.101 [2024-07-15 21:47:26.740304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.740351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 00:25:36.101 [2024-07-15 21:47:26.740489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.740526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 00:25:36.101 [2024-07-15 21:47:26.740644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.740693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 00:25:36.101 [2024-07-15 21:47:26.740820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.740873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 00:25:36.101 [2024-07-15 21:47:26.741002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.741054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 
00:25:36.101 [2024-07-15 21:47:26.741181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.741235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 00:25:36.101 [2024-07-15 21:47:26.741364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.741421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 00:25:36.101 [2024-07-15 21:47:26.741600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.741644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 00:25:36.101 [2024-07-15 21:47:26.741825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.741876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.101 qpair failed and we were unable to recover it. 00:25:36.101 [2024-07-15 21:47:26.742042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.101 [2024-07-15 21:47:26.742112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 
00:25:36.102 [2024-07-15 21:47:26.742306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.742369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.742549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.742607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.742765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.742812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.742945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.742998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.743129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.743195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 
00:25:36.102 [2024-07-15 21:47:26.743324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.743379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.743510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.743546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.743680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.743724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.743854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.743904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.744086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.744132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 
00:25:36.102 [2024-07-15 21:47:26.744275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.744332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.744473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.744518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.744664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.744720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.744858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.744913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.745051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.745103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 
00:25:36.102 [2024-07-15 21:47:26.745256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.745312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.745437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.745489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.745608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.745663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.745802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.745852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.746039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.746083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 
00:25:36.102 [2024-07-15 21:47:26.746218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.746273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.746449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.746495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.746677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.746723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.746863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.746909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.747045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.747097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 
00:25:36.102 [2024-07-15 21:47:26.747249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.747317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.747470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.747515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.747694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.747737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.747878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.747921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.748041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.748096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 
00:25:36.102 [2024-07-15 21:47:26.748307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.748374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.748551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.748603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.748741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.748800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.748923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.748958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.102 [2024-07-15 21:47:26.749077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.749124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 
00:25:36.102 [2024-07-15 21:47:26.749328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.102 [2024-07-15 21:47:26.749374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.102 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.749498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.749548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.749698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.749741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.749873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.749931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.750079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.750122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 
00:25:36.103 [2024-07-15 21:47:26.750275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.750321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.750503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.750546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.750678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.750734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.750858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.750913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.751047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.751090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 
00:25:36.103 [2024-07-15 21:47:26.751280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.751324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.751513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.751556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.751688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.751741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.751868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.751904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.752029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.752073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 
00:25:36.103 [2024-07-15 21:47:26.752202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.752258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.752381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.752416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.752596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.752643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.752768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.752827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.753027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.753074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 
00:25:36.103 [2024-07-15 21:47:26.753213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.753270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.753401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.753452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.753599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.753650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.753783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.753832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.753970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.754028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 
00:25:36.103 [2024-07-15 21:47:26.754152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.754203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.754327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.754380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.754508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.754559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.754692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.754725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.754861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.754907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 
00:25:36.103 [2024-07-15 21:47:26.755074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.755129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.755276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.755335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.755456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.755512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.755652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.755699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.755821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.755874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 
00:25:36.103 [2024-07-15 21:47:26.756004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.756048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.756179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.756236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.756376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.756411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.756512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.756566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.103 [2024-07-15 21:47:26.756693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.756753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 
00:25:36.103 [2024-07-15 21:47:26.756886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.103 [2024-07-15 21:47:26.756936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.103 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.757075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.757134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.757304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.757351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.757480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.757517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.757624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.757683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 
00:25:36.104 [2024-07-15 21:47:26.757815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.757870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.757985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.758036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.758162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.758216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.758333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.758389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.758521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.758575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 
00:25:36.104 [2024-07-15 21:47:26.758702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.758758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.758885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.758936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.759064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.759114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.759245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.759297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.759447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.759496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 
00:25:36.104 [2024-07-15 21:47:26.759630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.759688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.759819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.759875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.760027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.760071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.760215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.760266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.760401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.760458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 
00:25:36.104 [2024-07-15 21:47:26.760576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.760632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.760761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.760816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.760945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.761003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.761127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.761192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.761315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.761365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 
00:25:36.104 [2024-07-15 21:47:26.761500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.761553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.761673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.761728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.761864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.761916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.762042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.762092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.762283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.762357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 
00:25:36.104 [2024-07-15 21:47:26.762544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.762620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.762793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.762858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.763022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.763067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.763205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.763259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.763428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.763512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 
00:25:36.104 [2024-07-15 21:47:26.763671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.763737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.763908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.763973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.764112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.764185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.764308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.764364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.104 [2024-07-15 21:47:26.764491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.764528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 
00:25:36.104 [2024-07-15 21:47:26.764640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.104 [2024-07-15 21:47:26.764699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.104 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.764855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.764903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.765034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.765089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.765257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.765311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.765450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.765508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 
00:25:36.105 [2024-07-15 21:47:26.765640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.765696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.765834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.765904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.766078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.766158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.766335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.766400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.766572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.766622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 
00:25:36.105 [2024-07-15 21:47:26.766760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.766812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.766944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.766979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.767096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.767176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.767316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.767362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.767479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.767514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 
00:25:36.105 [2024-07-15 21:47:26.767628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.767686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.767807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.767843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.767956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.767992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.768099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.768136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.768264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.768309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 
00:25:36.105 [2024-07-15 21:47:26.768435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.768469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.768570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.768627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.768775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.768821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.768959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.769014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.769147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.769201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 
00:25:36.105 [2024-07-15 21:47:26.769327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.769381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.769518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.769570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.769700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.769754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.769886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.769946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.770074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.770127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 
00:25:36.105 [2024-07-15 21:47:26.770275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.770344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.770464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.770518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.105 [2024-07-15 21:47:26.770650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.105 [2024-07-15 21:47:26.770703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.105 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.770834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.770893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.773216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.773278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 
00:25:36.106 [2024-07-15 21:47:26.773428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.773484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.773636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.773692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.773824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.773879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.774012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.774049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.774162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.774221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 
00:25:36.106 [2024-07-15 21:47:26.774366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.774424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.774551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.774588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.774698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.774754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.774915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.774967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 00:25:36.106 [2024-07-15 21:47:26.775114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.106 [2024-07-15 21:47:26.775190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.106 qpair failed and we were unable to recover it. 
00:25:36.106 [2024-07-15 21:47:26.775329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.106 [2024-07-15 21:47:26.775386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.106 qpair failed and we were unable to recover it.
00:25:36.106 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats ~114 more times between 21:47:26.775 and 21:47:26.798, for tqpair=0x7fc080000b90, 0x7fc090000b90, and 0x2196190, all with addr=10.0.0.2, port=4420 ...]
00:25:36.109 [2024-07-15 21:47:26.798775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.798821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.798954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.799007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.799163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.799226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.799348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.799405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.799551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.799610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 
00:25:36.109 [2024-07-15 21:47:26.799736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.799790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.800000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.800058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.800263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.800333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.800526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.800592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.800802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.800850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 
00:25:36.109 [2024-07-15 21:47:26.800991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.801050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.801201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.801259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.801393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.801450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.801626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.801684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.801888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.801948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 
00:25:36.109 [2024-07-15 21:47:26.802100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.802162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.802305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.802344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.802470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.802535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.109 [2024-07-15 21:47:26.802678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.109 [2024-07-15 21:47:26.802716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.109 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.802889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.802951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 
00:25:36.110 [2024-07-15 21:47:26.803091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.803160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.803301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.803358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.803483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.803521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.803660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.803726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.803864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.803923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 
00:25:36.110 [2024-07-15 21:47:26.804052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.804106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.804262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.804319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.804508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.804569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.804712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.804750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.804869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.804926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 
00:25:36.110 [2024-07-15 21:47:26.805069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.805128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.805286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.805344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.805479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.805542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.805680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.805738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.805880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.805918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 
00:25:36.110 [2024-07-15 21:47:26.806051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.806107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.806325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.806387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.806531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.806591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.806794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.806854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.806977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.807032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 
00:25:36.110 [2024-07-15 21:47:26.807230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.807280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.807420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.807478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.807620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.807682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.807820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.807873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.808054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.808114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 
00:25:36.110 [2024-07-15 21:47:26.808290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.808343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.808501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.808584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.808818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.808888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.809066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.809173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.809348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.809436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 
00:25:36.110 [2024-07-15 21:47:26.809588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.809634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.809801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.809854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.810018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.810063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.810218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.810288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.810466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.810524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 
00:25:36.110 [2024-07-15 21:47:26.810712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.810773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.810934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.810995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.811123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.811218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.110 [2024-07-15 21:47:26.811379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.110 [2024-07-15 21:47:26.811433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.110 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.811593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.811646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 
00:25:36.111 [2024-07-15 21:47:26.811794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.811857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.812013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.812065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.812205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.812243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.812354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.812412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.812553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.812614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 
00:25:36.111 [2024-07-15 21:47:26.812795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.812845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.812998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.813047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.813201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.813253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.813382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.813435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.813576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.813641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 
00:25:36.111 [2024-07-15 21:47:26.813799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.813849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.814006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.814062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.111 qpair failed and we were unable to recover it. 00:25:36.111 [2024-07-15 21:47:26.814265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.111 [2024-07-15 21:47:26.814331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.814503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.814556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.814691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.814758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 
00:25:36.395 [2024-07-15 21:47:26.814911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.814963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.815113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.815174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.815328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.815405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.815619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.815691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.815908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.815979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 
00:25:36.395 [2024-07-15 21:47:26.816164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.816232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.816380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.816442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.816598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.816648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.816786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.816844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.816999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.817057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 
00:25:36.395 [2024-07-15 21:47:26.817201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.817263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.817421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.817477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.817616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.817675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.817814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.817872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.818011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.818047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 
00:25:36.395 [2024-07-15 21:47:26.818170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.818208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.818353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.818402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.818536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.818599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.818754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.818808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.818982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.819035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 
00:25:36.395 [2024-07-15 21:47:26.819175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.819235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.819374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.819434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.819605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.819656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.819805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.819871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.820008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.820045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 
00:25:36.395 [2024-07-15 21:47:26.820153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.820201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.820346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.820395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.820531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.820587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.820749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.820799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.820952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.821013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 
00:25:36.395 [2024-07-15 21:47:26.821161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.821222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.821364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.821425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.821565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.821628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.821763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.821800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.395 [2024-07-15 21:47:26.821911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.821967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 
00:25:36.395 [2024-07-15 21:47:26.822151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.395 [2024-07-15 21:47:26.822221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.395 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.822372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.822435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.822581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.822654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.822798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.822856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.823027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.823065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 
00:25:36.396 [2024-07-15 21:47:26.823175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.823234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.823395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.823442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.823573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.823629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.823763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.823817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.823942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.823998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 
00:25:36.396 [2024-07-15 21:47:26.824146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.824203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.824334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.824391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.824525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.824584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.824722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.824779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.824919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.824989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 
00:25:36.396 [2024-07-15 21:47:26.825114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.825192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.825328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.825387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.825508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.825558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.825690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.825746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.825874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.825932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 
00:25:36.396 [2024-07-15 21:47:26.826070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.826127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.826269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.826305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.826432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.826480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.826604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.826659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.826796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.826848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 
00:25:36.396 [2024-07-15 21:47:26.826976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.827030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.827260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.827323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.827451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.827507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.827669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.827719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.827853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.827910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 
00:25:36.396 [2024-07-15 21:47:26.828046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.828107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.828266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.828322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.828455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.828517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.828670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.828720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.828859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.828909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 
00:25:36.396 [2024-07-15 21:47:26.829055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.829105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.829236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.829293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.829430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.829490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.829640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.829689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.829812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.829864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 
00:25:36.396 [2024-07-15 21:47:26.830003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.830064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.396 qpair failed and we were unable to recover it. 00:25:36.396 [2024-07-15 21:47:26.830208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.396 [2024-07-15 21:47:26.830271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.830418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.830475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.830604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.830640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.830753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.830811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 
00:25:36.397 [2024-07-15 21:47:26.830957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.831009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.831147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.831184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.831306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.831365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.831489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.831526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.831630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.831689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 
00:25:36.397 [2024-07-15 21:47:26.831819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.831854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.831972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.832034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.832168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.832233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.832403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.832453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.832593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.832659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 
00:25:36.397 [2024-07-15 21:47:26.832821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.832873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.833003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.833063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.833231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.833305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.833450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.833505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 00:25:36.397 [2024-07-15 21:47:26.833659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.397 [2024-07-15 21:47:26.833719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.397 qpair failed and we were unable to recover it. 
00:25:36.397 [2024-07-15 21:47:26.833864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.397 [2024-07-15 21:47:26.833913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.397 qpair failed and we were unable to recover it.
[... the three-line pattern above (posix.c:1038 connect() failed, errno = 111 / nvme_tcp.c:2383 sock connection error / qpair failed and we were unable to recover it) repeats continuously from 21:47:26.833864 through 21:47:26.858106; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, with no qpair recovered. Affected tqpair values: 0x7fc088000b90, 0x7fc080000b90, 0x7fc090000b90, 0x2196190 ...]
00:25:36.400 [2024-07-15 21:47:26.858273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.858333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.858490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.858557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.858751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.858807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.858943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.859002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.859153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.859194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 
00:25:36.400 [2024-07-15 21:47:26.859319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.859388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.859529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.859588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.859736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.859803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.860005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.860062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.860203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.860272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 
00:25:36.400 [2024-07-15 21:47:26.860423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.860488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.860632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.860689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.860843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.860906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.861063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.861130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.861325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.861383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 
00:25:36.400 [2024-07-15 21:47:26.861526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.861588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.861743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.861812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.861953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.862015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.862177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.862248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.862403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.862469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 
00:25:36.400 [2024-07-15 21:47:26.862625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.862683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.400 qpair failed and we were unable to recover it. 00:25:36.400 [2024-07-15 21:47:26.862851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.400 [2024-07-15 21:47:26.862925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.863095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.863188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.863333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.863405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.863543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.863609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 
00:25:36.401 [2024-07-15 21:47:26.863819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.863877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.864051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.864122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.864292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.864336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.864532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.864590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.864785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.864846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 
00:25:36.401 [2024-07-15 21:47:26.865004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.865066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.865234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.865307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.865493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.865555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.865760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.865809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.866005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.866054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 
00:25:36.401 [2024-07-15 21:47:26.866223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.866292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.866488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.866542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.866687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.866733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.866899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.866952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.867115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.867191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 
00:25:36.401 [2024-07-15 21:47:26.867356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.867402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.867527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.867581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.867729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.867789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.867953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.868009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.868159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.868199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 
00:25:36.401 [2024-07-15 21:47:26.868363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.868429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.868581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.868638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.868798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.868867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.869055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.869134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.869376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.869454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 
00:25:36.401 [2024-07-15 21:47:26.869733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.869810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.870060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.870163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.870394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.870476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.870710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.870784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.870961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.871042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 
00:25:36.401 [2024-07-15 21:47:26.871303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.871391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.871642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.871712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.871934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.872001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.872162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.872225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.872369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.872427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 
00:25:36.401 [2024-07-15 21:47:26.872573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.872634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.872775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.872834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.401 qpair failed and we were unable to recover it. 00:25:36.401 [2024-07-15 21:47:26.873076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.401 [2024-07-15 21:47:26.873124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.873286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.873345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.873496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.873535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 
00:25:36.402 [2024-07-15 21:47:26.873662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.873721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.873885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.873942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.874094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.874166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.874341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.874390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.874522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.874582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 
00:25:36.402 [2024-07-15 21:47:26.874733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.874792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.874936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.874976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.875214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.875262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.875434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.875492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 00:25:36.402 [2024-07-15 21:47:26.875702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.402 [2024-07-15 21:47:26.875765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.402 qpair failed and we were unable to recover it. 
00:25:36.402 [2024-07-15 21:47:26.875917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.402 [2024-07-15 21:47:26.875973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.402 qpair failed and we were unable to recover it.
00:25:36.402-00:25:36.405 [2024-07-15 21:47:26.876120 .. 21:47:26.901001] (the same three-line sequence — connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it." — repeats ~110 more times against addr=10.0.0.2, port=4420, with the failing tqpair alternating between 0x7fc080000b90, 0x7fc090000b90 and, at the end, 0x2196190)
00:25:36.405 [2024-07-15 21:47:26.901166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.901215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.901342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.901407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.901613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.901670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.901823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.901870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.902073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.902155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 
00:25:36.405 [2024-07-15 21:47:26.902376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.902439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.902586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.902630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.902757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.902815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.903032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.903094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.903254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.903302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 
00:25:36.405 [2024-07-15 21:47:26.903468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.903538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.903752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.903817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.904021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.904069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.904235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.904309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.904459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.904524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 
00:25:36.405 [2024-07-15 21:47:26.904717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.904786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.904929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.904996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.905153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.905221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.905388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.905431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.905564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.905611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 
00:25:36.405 [2024-07-15 21:47:26.905791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.905852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.906056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.906114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.906303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.906367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.906511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.906589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.906737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.906800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 
00:25:36.405 [2024-07-15 21:47:26.906946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.907016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.907195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.907242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.907425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.907482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.907627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.907673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.405 [2024-07-15 21:47:26.907815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.907881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 
00:25:36.405 [2024-07-15 21:47:26.908084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.405 [2024-07-15 21:47:26.908151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.405 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.908364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.908424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.908582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.908653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.908805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.908837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.909070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.909129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 
00:25:36.406 [2024-07-15 21:47:26.909293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.909355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.909549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.909606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.909762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.909818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.910035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.910093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.910250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.910312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 
00:25:36.406 [2024-07-15 21:47:26.910487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.910546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.910699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.910744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.910931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.910989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.911128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.911204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.911391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.911448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 
00:25:36.406 [2024-07-15 21:47:26.911634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.911701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.911858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.911926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.912152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.912211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.912355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.912417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.912581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.912614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 
00:25:36.406 [2024-07-15 21:47:26.912791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.912869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.913036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.913101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.913333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.913392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.913597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.913654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.913796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.913857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 
00:25:36.406 [2024-07-15 21:47:26.914016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.914078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.914261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.914337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.914527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.914589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.914753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.914815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.914975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.915038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 
00:25:36.406 [2024-07-15 21:47:26.915210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.915261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.915406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.915460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.915603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.915668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.915807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.915850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.916056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.916115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 
00:25:36.406 [2024-07-15 21:47:26.916290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.916344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.916567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.916628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.916783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.916827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.916946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.916989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.917130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.917190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 
00:25:36.406 [2024-07-15 21:47:26.917329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.917375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.917512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.406 [2024-07-15 21:47:26.917553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.406 qpair failed and we were unable to recover it. 00:25:36.406 [2024-07-15 21:47:26.917729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.407 [2024-07-15 21:47:26.917774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.407 qpair failed and we were unable to recover it. 00:25:36.407 [2024-07-15 21:47:26.917970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.407 [2024-07-15 21:47:26.918030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.407 qpair failed and we were unable to recover it. 00:25:36.407 [2024-07-15 21:47:26.918236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.407 [2024-07-15 21:47:26.918298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.407 qpair failed and we were unable to recover it. 
00:25:36.407 [2024-07-15 21:47:26.918443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.407 [2024-07-15 21:47:26.918486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.407 qpair failed and we were unable to recover it.
00:25:36.407 [2024-07-15 21:47:26.918671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.407 [2024-07-15 21:47:26.918728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.407 qpair failed and we were unable to recover it.
00:25:36.409 [... identical connect() failed, errno = 111 (ECONNREFUSED) records repeat from 21:47:26.918930 through 21:47:26.948475 for tqpair=0x2196190 and tqpair=0x7fc090000b90, all with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:25:36.410 [2024-07-15 21:47:26.948708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.948767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.949005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.949063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.949287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.949360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.949580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.949638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.949869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.949927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 
00:25:36.410 [2024-07-15 21:47:26.950156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.950222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.950436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.950494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.950715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.950772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.950972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.951051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.951340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.951399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 
00:25:36.410 [2024-07-15 21:47:26.951555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.951623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.951815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.951883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.952101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.952187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.952337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.952383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.952587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.952644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 
00:25:36.410 [2024-07-15 21:47:26.952808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.952854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.953065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.953122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.953369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.953428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.953647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.953703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.953857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.953929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 
00:25:36.410 [2024-07-15 21:47:26.954099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.954156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.954355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.954413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.954641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.954699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.954879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.954924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.955067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.955100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 
00:25:36.410 [2024-07-15 21:47:26.955298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.955363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.955541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.955608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.955774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.955807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.955999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.956044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.956199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.956244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 
00:25:36.410 [2024-07-15 21:47:26.956421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.956467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.956672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.956729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.956883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.956947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.957168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.957226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.957427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.957499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 
00:25:36.410 [2024-07-15 21:47:26.957762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.957827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.958003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.958050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.958206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.958274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.958474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.958542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.410 [2024-07-15 21:47:26.958707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.958772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 
00:25:36.410 [2024-07-15 21:47:26.958990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.410 [2024-07-15 21:47:26.959047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.410 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.959214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.959261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.959390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.959455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.959667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.959723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.959914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.959960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 
00:25:36.411 [2024-07-15 21:47:26.960174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.960233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.960381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.960455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.960600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.960647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.960824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.960869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.961061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.961107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 
00:25:36.411 [2024-07-15 21:47:26.961338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.961397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.961560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.961628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.961878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.961934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.962135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.962219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.962385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.962452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 
00:25:36.411 [2024-07-15 21:47:26.962618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.962686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.962855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.962929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.963097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.963151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.963358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.963415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.963606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.963666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 
00:25:36.411 [2024-07-15 21:47:26.963815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.963882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.964058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.964118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.964313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.964401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.964606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.964663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.964832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.964892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 
00:25:36.411 [2024-07-15 21:47:26.965109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.965194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.965342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.965408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.965576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.965643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.965832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.965901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.966053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.966123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 
00:25:36.411 [2024-07-15 21:47:26.966311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.966345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.966545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.966602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.966828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.966886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.967096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.967183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.967333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.967365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 
00:25:36.411 [2024-07-15 21:47:26.967508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.967575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.967804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.967870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.968038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.968084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.968316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.968376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.968548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.968597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 
00:25:36.411 [2024-07-15 21:47:26.968812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.968870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.411 [2024-07-15 21:47:26.969039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.411 [2024-07-15 21:47:26.969107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.411 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.969344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.969406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.969580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.969647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.969852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.969912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 
00:25:36.412 [2024-07-15 21:47:26.970159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.970222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.970401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.970448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.970655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.970714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.970943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.971000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.971220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.971296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 
00:25:36.412 [2024-07-15 21:47:26.971453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.971529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.971741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.971799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.971990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.972057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.972289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.972348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.972584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.972641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 
00:25:36.412 [2024-07-15 21:47:26.972808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.972856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.972998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.973069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.973313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.973373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.973592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.973652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.973871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.973928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 
00:25:36.412 [2024-07-15 21:47:26.974161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.974225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.974400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.974471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.974684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.974741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.974956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.975014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.975191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.975257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 
00:25:36.412 [2024-07-15 21:47:26.975501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.975559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.975812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.975876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.976074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.976153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.976402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.976461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.976694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.976767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 
00:25:36.412 [2024-07-15 21:47:26.977002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.412 [2024-07-15 21:47:26.977059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.412 qpair failed and we were unable to recover it. 00:25:36.412 [2024-07-15 21:47:26.977300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.977359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.977570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.977627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.977823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.977896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.978121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.978214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 
00:25:36.413 [2024-07-15 21:47:26.978445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.978507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.978762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.978822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.979060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.979119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.979294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.979361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.979592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.979650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 
00:25:36.413 [2024-07-15 21:47:26.979846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.979906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.980164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.980230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.980502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.980561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.980783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.980842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.981068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.981128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 
00:25:36.413 [2024-07-15 21:47:26.981391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.981451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.981690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.981747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.981923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.981981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.982213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.982271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.982487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.982546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 
00:25:36.413 [2024-07-15 21:47:26.982734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.982793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.983030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.983090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.983337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.983397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.983659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.983718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.983986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.984045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 
00:25:36.413 [2024-07-15 21:47:26.984224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.984281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.984520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.984580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.984795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.984854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.985124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.985195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.985409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.985468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 
00:25:36.413 [2024-07-15 21:47:26.985698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.985756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.985982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.986041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.986329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.986389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.986618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.986679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.986910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.986970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 
00:25:36.413 [2024-07-15 21:47:26.987181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.987239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.987459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.987518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.987701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.987761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.987981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.988040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.988282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.988343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 
00:25:36.413 [2024-07-15 21:47:26.988505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.988565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.988834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.413 [2024-07-15 21:47:26.988893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.413 qpair failed and we were unable to recover it. 00:25:36.413 [2024-07-15 21:47:26.989186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.989247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.989420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.989479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.989710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.989770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 
00:25:36.414 [2024-07-15 21:47:26.990047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.990106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.990356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.990418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.990591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.990668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.990905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.990967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.991236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.991298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 
00:25:36.414 [2024-07-15 21:47:26.991515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.991574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.991785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.991844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.992003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.992072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.992298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.992358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.992590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.992652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 
00:25:36.414 [2024-07-15 21:47:26.992858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.992917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.993185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.993256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.993477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.993539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.993772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.993832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 00:25:36.414 [2024-07-15 21:47:26.994048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.414 [2024-07-15 21:47:26.994107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.414 qpair failed and we were unable to recover it. 
00:25:36.414 [2024-07-15 21:47:26.994308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.414 [2024-07-15 21:47:26.994371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.414 qpair failed and we were unable to recover it.
00:25:36.414 [2024-07-15 21:47:26.997286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.414 [2024-07-15 21:47:26.997365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.414 qpair failed and we were unable to recover it.
00:25:36.414 [2024-07-15 21:47:26.997601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.414 [2024-07-15 21:47:26.997668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.414 qpair failed and we were unable to recover it.
00:25:36.417 [error cycle above repeated continuously from 2024-07-15 21:47:26.994 through 21:47:27.027 for tqpair=0x2196190, 0x7fc080000b90, and 0x7fc090000b90: every connect() attempt to addr=10.0.0.2, port=4420 failed with errno = 111 and no qpair could be recovered]
00:25:36.417 [2024-07-15 21:47:27.027370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.027434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.027653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.027713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.027942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.028003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.028247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.028308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.028487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.028550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 
00:25:36.417 [2024-07-15 21:47:27.028834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.028895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.029059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.029118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.029372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.029445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.029714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.029775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.029948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.030011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 
00:25:36.417 [2024-07-15 21:47:27.030295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.030358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.030568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.030631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.030901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.030961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.417 qpair failed and we were unable to recover it. 00:25:36.417 [2024-07-15 21:47:27.031175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.417 [2024-07-15 21:47:27.031236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.031518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.031577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 
00:25:36.418 [2024-07-15 21:47:27.031769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.031834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.032052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.032111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.032319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.032387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.032611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.032672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.032875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.032935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 
00:25:36.418 [2024-07-15 21:47:27.033111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.033207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.033406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.033473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.033654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.033714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.033908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.033969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.034163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.034224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 
00:25:36.418 [2024-07-15 21:47:27.034452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.034511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.034747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.034807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.034987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.035046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.035291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.035354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.035587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.035647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 
00:25:36.418 [2024-07-15 21:47:27.035863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.035923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.036115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.036190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.036431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.036493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.036664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.036726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.036961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.037031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 
00:25:36.418 [2024-07-15 21:47:27.037228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.037289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.037528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.037589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.037772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.037832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.038051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.038113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.038319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.038384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 
00:25:36.418 [2024-07-15 21:47:27.038595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.038655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.038883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.038944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.039165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.039226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.039452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.039511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.039672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.039731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 
00:25:36.418 [2024-07-15 21:47:27.039956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.040016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.040183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.040244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.040463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.040522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.040768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.040827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.041046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.041105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 
00:25:36.418 [2024-07-15 21:47:27.041307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.041367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.041535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.041594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.041812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.041872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.042105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.418 [2024-07-15 21:47:27.042184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.418 qpair failed and we were unable to recover it. 00:25:36.418 [2024-07-15 21:47:27.042363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.042423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 
00:25:36.419 [2024-07-15 21:47:27.042646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.042705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.042871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.042930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.043108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.043186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.043356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.043416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.043612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.043671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 
00:25:36.419 [2024-07-15 21:47:27.043884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.043943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.044173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.044243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.044410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.044469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.044630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.044690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.044905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.044965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 
00:25:36.419 [2024-07-15 21:47:27.045125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.045199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.045417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.045477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.045714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.045773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.046045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.046104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.046287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.046347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 
00:25:36.419 [2024-07-15 21:47:27.046514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.046574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.046757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.046818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.046982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.047042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.047248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.047310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 00:25:36.419 [2024-07-15 21:47:27.047535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.419 [2024-07-15 21:47:27.047598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.419 qpair failed and we were unable to recover it. 
00:25:36.419 [2024-07-15 21:47:27.047828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.047887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.048079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.048158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.048334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.048395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.048619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.048679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.048891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.048952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.049170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.049231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.049419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.049479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.049671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.049730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.049917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.049977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.050181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.050241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.050456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.050516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.050731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.050791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.050957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.051017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.051248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.051310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.051555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.051613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.051845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.419 [2024-07-15 21:47:27.051905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.419 qpair failed and we were unable to recover it.
00:25:36.419 [2024-07-15 21:47:27.052076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.052149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.052334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.052394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.052627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.052689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.052917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.052977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.053167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.053228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.053407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.053467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.053642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.053700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.053940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.054000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.054249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.054312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.054530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.054590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.054822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.054882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.055066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.055133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.055352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.055415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.055598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.055660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.055935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.055995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.056185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.056246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.056468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.056530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.056765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.056825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.057041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.057100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.057327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.057387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.057658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.057718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.057920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.057979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.058203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.058264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.058440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.058501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.058733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.058802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.059036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.059097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.059300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.059364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.059596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.059659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.059837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.059896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.060078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.060152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.060390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.060450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.060667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.060726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.060897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.060956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.061125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.061198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.061473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.061532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.061710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.061769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.061952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.062011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.062188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.062251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.062428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.062488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.062655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.062714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.062882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.420 [2024-07-15 21:47:27.062940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.420 qpair failed and we were unable to recover it.
00:25:36.420 [2024-07-15 21:47:27.063121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.063195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.063415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.063475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.063643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.063702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.063921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.063981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.064252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.064311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.064476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.064537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.064701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.064761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.064921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.064979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.065163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.065224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.065424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.065484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.065645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.065704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.065937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.065997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.066218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.066280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.066443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.066503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.066731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.066790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.067006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.067065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.067243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.067303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.067462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.067522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.067753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.067811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.068029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.068088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.068345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.068426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.068621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.068684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.068922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.068986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.069171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.069234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.069435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.069501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.069779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.069840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.070072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.070132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.070333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.070392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.070563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.070623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.070842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.070904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.071183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.071245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.071412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.071472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.071681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.071741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.071963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.072022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.421 [2024-07-15 21:47:27.072240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.421 [2024-07-15 21:47:27.072302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.421 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.072476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.072535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.072764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.072823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.073041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.073110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.073361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.073421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.073604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.073663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.073878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.073938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.074159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.074218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.074454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.074513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.074734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.074794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.075028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.075087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.075329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.075396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.075624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.075684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.075918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.075979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.076201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.076264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.076449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.076508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.076686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.076746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.077027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.077087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.077352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.077428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.077666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.077730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.077972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.078032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.078251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.078313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.078540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.078603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.078774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.422 [2024-07-15 21:47:27.078834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.422 qpair failed and we were unable to recover it.
00:25:36.422 [2024-07-15 21:47:27.079104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.079182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.079404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.079463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.079690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.079752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.079975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.080033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.080266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.080327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 
00:25:36.422 [2024-07-15 21:47:27.080508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.080572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.080756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.080828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.081098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.081176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.081390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.081449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.081666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.081726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 
00:25:36.422 [2024-07-15 21:47:27.081995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.082053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.082345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.082407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.082603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.082663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.082842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.082904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.083118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.083197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 
00:25:36.422 [2024-07-15 21:47:27.083428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.083490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.083728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.083788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.422 [2024-07-15 21:47:27.084008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.422 [2024-07-15 21:47:27.084068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.422 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.084298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.084359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.084584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.084646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 
00:25:36.423 [2024-07-15 21:47:27.084878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.084938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.085112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.085191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.085464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.085524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.085761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.085826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.086050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.086111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 
00:25:36.423 [2024-07-15 21:47:27.086353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.086414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.086636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.086696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.086916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.086976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.087210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.087271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.087488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.087547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 
00:25:36.423 [2024-07-15 21:47:27.087713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.087772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.088009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.088070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.088308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.088369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.088586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.088657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.088869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.088927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 
00:25:36.423 [2024-07-15 21:47:27.089197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.089258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.089475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.089535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.089699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.089759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.089977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.090036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.090280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.090340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 
00:25:36.423 [2024-07-15 21:47:27.090554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.090614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.090813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.090872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.091091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.091162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.091398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.091457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.091726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.091785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 
00:25:36.423 [2024-07-15 21:47:27.092048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.092106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.092391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.092450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.092676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.092735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.092947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.093006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.093273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.093334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 
00:25:36.423 [2024-07-15 21:47:27.093564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.093626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.093855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.093915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.094129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.094201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.094418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.094477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.094678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.094737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 
00:25:36.423 [2024-07-15 21:47:27.095022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.095080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.095294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.095354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.095577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.095636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.423 [2024-07-15 21:47:27.095807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.423 [2024-07-15 21:47:27.095867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.423 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.096082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.096150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 
00:25:36.424 [2024-07-15 21:47:27.096434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.096495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.096710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.096769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.096991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.097050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.097281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.097341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.097557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.097615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 
00:25:36.424 [2024-07-15 21:47:27.097829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.097889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.098112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.098189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.098410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.098469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.098737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.098796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.099040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.099099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 
00:25:36.424 [2024-07-15 21:47:27.099284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.099343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.099527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.099586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.099761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.099823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.100039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.100108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.100362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.100422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 
00:25:36.424 [2024-07-15 21:47:27.100637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.100697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.100915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.100975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.101219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.101285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.101558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.101618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.101840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.101900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 
00:25:36.424 [2024-07-15 21:47:27.102124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.102209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.102427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.102487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.102653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.102713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.102980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.103039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 00:25:36.424 [2024-07-15 21:47:27.103289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.424 [2024-07-15 21:47:27.103352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.424 qpair failed and we were unable to recover it. 
00:25:36.424 [2024-07-15 21:47:27.103523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.103582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.103797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.103857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.104025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.104085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.104298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.104371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.104578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.104639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.104862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.104922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.105158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.105218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.105440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.105499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.105732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.105794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.105961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.106021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.106209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.106270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.106505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.106567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.106751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.106814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.107001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.107065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.424 [2024-07-15 21:47:27.107335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.424 [2024-07-15 21:47:27.107395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.424 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.107580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.107644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.107831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.107897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.108124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.108195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.108404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.108466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.108660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.108720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.108992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.109051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.109341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.109402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.109632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.109693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.109875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.109934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.110116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.110191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.110426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.110486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.110669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.110728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.110892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.110951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.111114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.111187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.111377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.111439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.111613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.111671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.111890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.111950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.112132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.112206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.112372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.112433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.112599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.112659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.112826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.112885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.113096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.113171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.113358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.113417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.113633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.113692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.113884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.113946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.114119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.114200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.114411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.114472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.114670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.114737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.114918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.114979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.115206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.115266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.115500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.115561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.115750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.115805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.115991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.116047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.116288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.425 [2024-07-15 21:47:27.116348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.425 qpair failed and we were unable to recover it.
00:25:36.425 [2024-07-15 21:47:27.116529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.116586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.116820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.116879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.117061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.117120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.117316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.117376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.117609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.117670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.117855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.117918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.118164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.118236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.118473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.118534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.118722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.118783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.118961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.119021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.119214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.119277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.119511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.119572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.119751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.119813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.119997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.120060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.120304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.120364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.120546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.120605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.120782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.120841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.121025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.121084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.121330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.121394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.121576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.121636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.121817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.121877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.122112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.122197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.122392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.122455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.122647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.122707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.122882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.122941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.123125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.123202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.123386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.123446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.123626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.123685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.123873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.123934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.124111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.124188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.124370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.124432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.124651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.124711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.124943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.125002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.125226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.125300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.125536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.125598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.125777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.125838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.126069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.126130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.126335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.126398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.126610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.126669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.426 qpair failed and we were unable to recover it.
00:25:36.426 [2024-07-15 21:47:27.126895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.426 [2024-07-15 21:47:27.126955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.427 qpair failed and we were unable to recover it.
00:25:36.427 [2024-07-15 21:47:27.127177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.427 [2024-07-15 21:47:27.127239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.427 qpair failed and we were unable to recover it.
00:25:36.427 [2024-07-15 21:47:27.127411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.427 [2024-07-15 21:47:27.127471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.427 qpair failed and we were unable to recover it.
00:25:36.427 [2024-07-15 21:47:27.127656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.427 [2024-07-15 21:47:27.127718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.427 qpair failed and we were unable to recover it.
00:25:36.427 [2024-07-15 21:47:27.127919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.427 [2024-07-15 21:47:27.127980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.427 qpair failed and we were unable to recover it.
00:25:36.427 [2024-07-15 21:47:27.128196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.427 [2024-07-15 21:47:27.128257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.427 qpair failed and we were unable to recover it.
00:25:36.427 [2024-07-15 21:47:27.128440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.427 [2024-07-15 21:47:27.128505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.427 qpair failed and we were unable to recover it.
00:25:36.427 [2024-07-15 21:47:27.128739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.128799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.128998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.129057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.129247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.129310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.129528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.129587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.129775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.129834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 
00:25:36.427 [2024-07-15 21:47:27.130021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.130080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.130358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.130418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.130594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.130654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.130818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.130877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.131086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.131161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 
00:25:36.427 [2024-07-15 21:47:27.131337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.131399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.131572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.131631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.131829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.131890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.132068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.132128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.132316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.132383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 
00:25:36.427 [2024-07-15 21:47:27.132558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.132618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.132781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.132843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.133035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.133095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.133329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.133400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.133589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.133649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 
00:25:36.427 [2024-07-15 21:47:27.133832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.133891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.134076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.134153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.134350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.134410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.134617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.134677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.134854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.134913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 
00:25:36.427 [2024-07-15 21:47:27.135115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.135188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.135363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.135424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.135598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.135659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.135841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.135902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.136100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.136171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 
00:25:36.427 [2024-07-15 21:47:27.136449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.136511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.136687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.136747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.136922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.136982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.427 [2024-07-15 21:47:27.137181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.427 [2024-07-15 21:47:27.137247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.427 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.137418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.137477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 
00:25:36.428 [2024-07-15 21:47:27.137659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.137719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.137909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.137972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.138170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.138232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.138405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.138467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.138624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.138683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 
00:25:36.428 [2024-07-15 21:47:27.138842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.138902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.139103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.139187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.139380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.139440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.139620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.139679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.139855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.139913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 
00:25:36.428 [2024-07-15 21:47:27.140094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.140166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.140371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.140434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.140626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.140687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.140867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.140928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.141122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.141196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 
00:25:36.428 [2024-07-15 21:47:27.141399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.141456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.141637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.141695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.141879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.141941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.142164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.142225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.142406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.142473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 
00:25:36.428 [2024-07-15 21:47:27.142663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.142726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.142912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.142974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.143170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.143233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.143401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.143461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.143656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.143715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 
00:25:36.428 [2024-07-15 21:47:27.143892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.143952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.144130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.144201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.144369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.144428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.144624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.144685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.144860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.144922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 
00:25:36.428 [2024-07-15 21:47:27.145111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.145186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.145377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.145436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.145614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.145672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.145865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.145924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.146101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.146173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 
00:25:36.428 [2024-07-15 21:47:27.146369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.146429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.146608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.146667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.146868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.146930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.147117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.428 [2024-07-15 21:47:27.147193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.428 qpair failed and we were unable to recover it. 00:25:36.428 [2024-07-15 21:47:27.147389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.147448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 
00:25:36.429 [2024-07-15 21:47:27.147643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.147703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.147883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.147942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.148124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.148197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.148370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.148426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.148618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.148676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 
00:25:36.429 [2024-07-15 21:47:27.148855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.148917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.149105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.149192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.149364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.149424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.149588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.149648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.149811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.149869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 
00:25:36.429 [2024-07-15 21:47:27.150044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.150103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.150321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.150382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.150540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.150599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.150769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.150828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.151016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.151075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 
00:25:36.429 [2024-07-15 21:47:27.151289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.151350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.151548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.151607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.151777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.151836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.152022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.152086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.152311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.152369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 
00:25:36.429 [2024-07-15 21:47:27.152567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.152622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.152788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.152844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.153010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.153064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.153277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.153334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.153514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.153571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 
00:25:36.429 [2024-07-15 21:47:27.153759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.153815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.153981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.154037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.154209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.154266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.154457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.154514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.154681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.154737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 
00:25:36.429 [2024-07-15 21:47:27.154900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.154956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.155115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.155190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.155387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.155443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.155644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.155701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 00:25:36.429 [2024-07-15 21:47:27.155863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.155920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.429 qpair failed and we were unable to recover it. 
00:25:36.429 [2024-07-15 21:47:27.156092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.429 [2024-07-15 21:47:27.156163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.156345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.156402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.156570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.156627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.156798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.156855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.157028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.157084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 
00:25:36.430 [2024-07-15 21:47:27.157288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.157345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.157517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.157574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.157770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.157830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.157998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.158055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.158258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.158317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 
00:25:36.430 [2024-07-15 21:47:27.158514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.158571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.158754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.158822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.158986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.159041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.159246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.159313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.159544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.159620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 
00:25:36.430 [2024-07-15 21:47:27.159824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.159883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.160072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.160179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.160415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.160505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.160726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.160795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.160975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.161039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 
00:25:36.430 [2024-07-15 21:47:27.161238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.161299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.161492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.161550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.161723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.161782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.161955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.162021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.162202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.162262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 
00:25:36.430 [2024-07-15 21:47:27.162447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.162509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.162690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.162755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.162928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.162991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.163177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.163245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.163419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.163483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 
00:25:36.430 [2024-07-15 21:47:27.163655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.163723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.163900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.163959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.164127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.164196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.164372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.164433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.164617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.164676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 
00:25:36.430 [2024-07-15 21:47:27.164864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.164923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.165096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.165169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.430 qpair failed and we were unable to recover it. 00:25:36.430 [2024-07-15 21:47:27.165356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.430 [2024-07-15 21:47:27.165415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.431 qpair failed and we were unable to recover it. 00:25:36.431 [2024-07-15 21:47:27.165583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.431 [2024-07-15 21:47:27.165656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.431 qpair failed and we were unable to recover it. 00:25:36.431 [2024-07-15 21:47:27.165822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.431 [2024-07-15 21:47:27.165880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.431 qpair failed and we were unable to recover it. 
00:25:36.431 [2024-07-15 21:47:27.166062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.431 [2024-07-15 21:47:27.166123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.431 qpair failed and we were unable to recover it. 00:25:36.431 [2024-07-15 21:47:27.166383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.166477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.166769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.166865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.167130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.167210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.167415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.167478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 
00:25:36.718 [2024-07-15 21:47:27.167680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.167741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.167920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.167986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.168163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.168223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.168423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.168484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.168681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.168741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 
00:25:36.718 [2024-07-15 21:47:27.168922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.168981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.169158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.169222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.169438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.169499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.169679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.169742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.169931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.169990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 
00:25:36.718 [2024-07-15 21:47:27.170175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.170236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.170424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.170485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.170671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.170730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.718 qpair failed and we were unable to recover it. 00:25:36.718 [2024-07-15 21:47:27.170900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.718 [2024-07-15 21:47:27.170968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.171189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.171250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 
00:25:36.719 [2024-07-15 21:47:27.171444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.171501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.171699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.171758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.171919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.171975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.172161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.172228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.172392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.172448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 
00:25:36.719 [2024-07-15 21:47:27.172649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.172716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.172960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.173067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.173336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.173420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.173681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.173772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 00:25:36.719 [2024-07-15 21:47:27.173975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.719 [2024-07-15 21:47:27.174034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.719 qpair failed and we were unable to recover it. 
00:25:36.719 [2024-07-15 21:47:27.174238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.719 [2024-07-15 21:47:27.174298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.719 qpair failed and we were unable to recover it.
[... the connect()/qpair error triple above repeats for tqpair=0x2196190, addr=10.0.0.2, port=4420 from 21:47:27.174494 through 21:47:27.196270 ...]
00:25:36.721 [2024-07-15 21:47:27.196491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.721 [2024-07-15 21:47:27.196591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.721 qpair failed and we were unable to recover it.
00:25:36.721 [2024-07-15 21:47:27.196850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.722 [2024-07-15 21:47:27.196938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.722 qpair failed and we were unable to recover it.
00:25:36.722 [2024-07-15 21:47:27.197201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.722 [2024-07-15 21:47:27.197291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:36.722 qpair failed and we were unable to recover it.
[... the same error triple then repeats for tqpair=0x2196190, addr=10.0.0.2, port=4420 from 21:47:27.197518 through 21:47:27.203964 ...]
00:25:36.722 [2024-07-15 21:47:27.204171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.722 [2024-07-15 21:47:27.204231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.722 qpair failed and we were unable to recover it. 00:25:36.722 [2024-07-15 21:47:27.204420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.722 [2024-07-15 21:47:27.204479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.722 qpair failed and we were unable to recover it. 00:25:36.722 [2024-07-15 21:47:27.204668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.722 [2024-07-15 21:47:27.204736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.722 qpair failed and we were unable to recover it. 00:25:36.722 [2024-07-15 21:47:27.204938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.722 [2024-07-15 21:47:27.204997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.722 qpair failed and we were unable to recover it. 00:25:36.722 [2024-07-15 21:47:27.205183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.722 [2024-07-15 21:47:27.205245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.722 qpair failed and we were unable to recover it. 
00:25:36.722 [2024-07-15 21:47:27.205437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.722 [2024-07-15 21:47:27.205496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.205689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.205748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.205933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.205998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.206187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.206250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.206425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.206485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 
00:25:36.723 [2024-07-15 21:47:27.206679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.206738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.206941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.207000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.207236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.207297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.207481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.207552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.207786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.207847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 
00:25:36.723 [2024-07-15 21:47:27.208084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.208167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.208408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.208468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.208652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.208718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.208898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.208953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.209178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.209247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 
00:25:36.723 [2024-07-15 21:47:27.209432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.209492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.209733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.209792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.209988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.210046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.210256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.210322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.210534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.210594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 
00:25:36.723 [2024-07-15 21:47:27.210783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.210846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.211023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.211082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.211279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.211340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.211518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.211578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.723 qpair failed and we were unable to recover it. 00:25:36.723 [2024-07-15 21:47:27.211803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.723 [2024-07-15 21:47:27.211876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 
00:25:36.724 [2024-07-15 21:47:27.212039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.212104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.212344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.212402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.212580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.212640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.212826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.212894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.213085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.213172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 
00:25:36.724 [2024-07-15 21:47:27.213375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.213438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.213632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.213695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.213883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.213942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.214126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.214212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.214441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.214503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 
00:25:36.724 [2024-07-15 21:47:27.214736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.214795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.214955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.215020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.215203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.215263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.215459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.215518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.215738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.215804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 
00:25:36.724 [2024-07-15 21:47:27.216018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.216095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.216352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.216417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.216633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.216696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.216863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.216922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.217157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.217216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 
00:25:36.724 [2024-07-15 21:47:27.217400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.217461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.217635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.217690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.217859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.217915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.218077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.218136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.218341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.218402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 
00:25:36.724 [2024-07-15 21:47:27.218643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.218702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.218931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.219006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.219203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.219277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.219438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.219494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.219666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.219727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 
00:25:36.724 [2024-07-15 21:47:27.219886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.219945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.220102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.724 [2024-07-15 21:47:27.220170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.724 qpair failed and we were unable to recover it. 00:25:36.724 [2024-07-15 21:47:27.220343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.220401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.220578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.220631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.220818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.220875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 
00:25:36.725 [2024-07-15 21:47:27.221039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.221105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.221337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.221394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.221583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.221647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.221880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.221939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.222117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.222204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 
00:25:36.725 [2024-07-15 21:47:27.222454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.222516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.222698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.222763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.222925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.222981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.223165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.223225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 00:25:36.725 [2024-07-15 21:47:27.223388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.725 [2024-07-15 21:47:27.223448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.725 qpair failed and we were unable to recover it. 
00:25:36.725 [2024-07-15 21:47:27.223622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.725 [2024-07-15 21:47:27.223686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.725 qpair failed and we were unable to recover it.
[The preceding three-line error sequence repeats continuously from 21:47:27.223622 through 21:47:27.252203, alternating between tqpair=0x7fc080000b90 and tqpair=0x2196190. Every connect() attempt to 10.0.0.2, port=4420 failed with errno = 111 (connection refused), and no qpair was recovered.]
00:25:36.728 [2024-07-15 21:47:27.252382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.728 [2024-07-15 21:47:27.252439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.728 qpair failed and we were unable to recover it. 00:25:36.728 [2024-07-15 21:47:27.252598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.728 [2024-07-15 21:47:27.252665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.728 qpair failed and we were unable to recover it. 00:25:36.728 [2024-07-15 21:47:27.252869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.728 [2024-07-15 21:47:27.252928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.728 qpair failed and we were unable to recover it. 00:25:36.728 [2024-07-15 21:47:27.253110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.728 [2024-07-15 21:47:27.253197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.728 qpair failed and we were unable to recover it. 00:25:36.728 [2024-07-15 21:47:27.253375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.728 [2024-07-15 21:47:27.253437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.728 qpair failed and we were unable to recover it. 
00:25:36.728 [2024-07-15 21:47:27.253616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.728 [2024-07-15 21:47:27.253685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.728 qpair failed and we were unable to recover it. 00:25:36.728 [2024-07-15 21:47:27.253861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.728 [2024-07-15 21:47:27.253922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.728 qpair failed and we were unable to recover it. 00:25:36.728 [2024-07-15 21:47:27.254102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.728 [2024-07-15 21:47:27.254174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.254340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.254399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.254565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.254621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 
00:25:36.729 [2024-07-15 21:47:27.254846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.254904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.255079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.255161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.255362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.255425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.255648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.255681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.255888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.255949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 
00:25:36.729 [2024-07-15 21:47:27.256135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.256220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.256397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.256462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.256655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.256724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.256903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.256972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.257169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.257231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 
00:25:36.729 [2024-07-15 21:47:27.257416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.257479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.257660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.257725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.257905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.257973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.258166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.258223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.258398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.258471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 
00:25:36.729 [2024-07-15 21:47:27.258741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.258817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.259053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.259113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.259318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.259386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.259575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.259651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.259817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.259876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 
00:25:36.729 [2024-07-15 21:47:27.260074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.260153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.260333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.260394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.260575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.260637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.260823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.260886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.261120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.261205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 
00:25:36.729 [2024-07-15 21:47:27.261416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.729 [2024-07-15 21:47:27.261480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.729 qpair failed and we were unable to recover it. 00:25:36.729 [2024-07-15 21:47:27.261672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.261741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.261929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.261990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.262198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.262259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.262443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.262503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 
00:25:36.730 [2024-07-15 21:47:27.262726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.262786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.262960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.263020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.263210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.263271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.263458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.263524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.263727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.263790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 
00:25:36.730 [2024-07-15 21:47:27.263971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.264039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.264230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.264291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.264518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.264581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.264743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.264805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.264991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.265060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 
00:25:36.730 [2024-07-15 21:47:27.265305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.265383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.265557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.265619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.265807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.265895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.266176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.266238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.266394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.266451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 
00:25:36.730 [2024-07-15 21:47:27.266625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.266693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.266883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.266942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.267197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.267289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.267557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.267638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.267902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.267966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 
00:25:36.730 [2024-07-15 21:47:27.268191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.268253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.268434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.268492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.268722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.268797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.268979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.269045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.269249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.269309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 
00:25:36.730 [2024-07-15 21:47:27.269472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.269545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.269715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.269771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.270030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.270092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.270382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.270452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.270648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.270728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 
00:25:36.730 [2024-07-15 21:47:27.270983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.271073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.730 qpair failed and we were unable to recover it. 00:25:36.730 [2024-07-15 21:47:27.271358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.730 [2024-07-15 21:47:27.271421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 00:25:36.731 [2024-07-15 21:47:27.271640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.731 [2024-07-15 21:47:27.271709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 00:25:36.731 [2024-07-15 21:47:27.271871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.731 [2024-07-15 21:47:27.271939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 00:25:36.731 [2024-07-15 21:47:27.272108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.731 [2024-07-15 21:47:27.272192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 
00:25:36.731 [2024-07-15 21:47:27.272367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.731 [2024-07-15 21:47:27.272464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 00:25:36.731 [2024-07-15 21:47:27.272631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.731 [2024-07-15 21:47:27.272694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 00:25:36.731 [2024-07-15 21:47:27.272928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.731 [2024-07-15 21:47:27.272999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 00:25:36.731 [2024-07-15 21:47:27.273227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.731 [2024-07-15 21:47:27.273290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 00:25:36.731 [2024-07-15 21:47:27.273483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.731 [2024-07-15 21:47:27.273548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.731 qpair failed and we were unable to recover it. 
00:25:36.734 [2024-07-15 21:47:27.301543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.301602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.301760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.301828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.302077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.302137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.302364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.302423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.302604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.302671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 
00:25:36.734 [2024-07-15 21:47:27.302842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.302910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.303088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.303170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.303369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.303430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.303625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.303687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.303867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.303927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 
00:25:36.734 [2024-07-15 21:47:27.304172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.304243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.304414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.304483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.304691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.304762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.304960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.305021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 00:25:36.734 [2024-07-15 21:47:27.305188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.734 [2024-07-15 21:47:27.305248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.734 qpair failed and we were unable to recover it. 
00:25:36.735 [2024-07-15 21:47:27.305474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.305534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.305705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.305766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.305938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.305999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.306165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.306230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.306429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.306489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 
00:25:36.735 [2024-07-15 21:47:27.306661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.306720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.306881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.306948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.307112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.307192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.307378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.307440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.307616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.307675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 
00:25:36.735 [2024-07-15 21:47:27.307896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.307955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.308159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.308225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.308460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.308517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.308758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.308820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.308983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.309040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 
00:25:36.735 [2024-07-15 21:47:27.309238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.309299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.309516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.309576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.309738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.309795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.309954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.310010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.310193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.310262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 
00:25:36.735 [2024-07-15 21:47:27.310449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.310506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.310681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.310746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.310930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.310990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.311165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.311226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.311474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.311534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 
00:25:36.735 [2024-07-15 21:47:27.311692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.311758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.311945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.312008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.312196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.312260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.312442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.312501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.312677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.312736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 
00:25:36.735 [2024-07-15 21:47:27.312921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.312990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.313164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.313227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.313413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.313483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.313646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.313706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 00:25:36.735 [2024-07-15 21:47:27.313879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.313950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.735 qpair failed and we were unable to recover it. 
00:25:36.735 [2024-07-15 21:47:27.314157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.735 [2024-07-15 21:47:27.314219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.314416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.314484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.314702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.314772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.314975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.315035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.315204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.315274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 
00:25:36.736 [2024-07-15 21:47:27.315496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.315556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.315758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.315814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.315974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.316033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.316211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.316272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.316495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.316562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 
00:25:36.736 [2024-07-15 21:47:27.316750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.316807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.316985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.317056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.317246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.317314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.317544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.317603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.317816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.317877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 
00:25:36.736 [2024-07-15 21:47:27.318034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.318103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.318299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.318363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.318559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.318622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.318807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.318866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.319039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.319098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 
00:25:36.736 [2024-07-15 21:47:27.319350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.319409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.319633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.319693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.319857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.319926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.320122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.320206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.320397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.320465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 
00:25:36.736 [2024-07-15 21:47:27.320634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.320693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.320906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.320966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.321197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.321258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.321489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.321545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.321732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.321799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 
00:25:36.736 [2024-07-15 21:47:27.321976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.322040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.322298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.322360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.322557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.322618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.736 [2024-07-15 21:47:27.322790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.736 [2024-07-15 21:47:27.322857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.736 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.323052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.323112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 
00:25:36.737 [2024-07-15 21:47:27.323315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.323376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.323613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.323672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.323841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.323905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.324089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.324170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.324364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.324424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 
00:25:36.737 [2024-07-15 21:47:27.324592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.324655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.324824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.324880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.325069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.325171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.325364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.325426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.325603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.325662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 
00:25:36.737 [2024-07-15 21:47:27.325827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.325883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.326098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.326178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.326401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.326467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.326694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.326752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.326935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.327002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 
00:25:36.737 [2024-07-15 21:47:27.327244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.327303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.327493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.327555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.327771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.327829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.328043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.328103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.328348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.328409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 
00:25:36.737 [2024-07-15 21:47:27.328567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.328623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.328815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.328875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.329032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.329094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.329290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.329361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.329568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.329628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 
00:25:36.737 [2024-07-15 21:47:27.329865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.329924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.330086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.330168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.330399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.330458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.330634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.330693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 00:25:36.737 [2024-07-15 21:47:27.330863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.737 [2024-07-15 21:47:27.330924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.737 qpair failed and we were unable to recover it. 
00:25:36.737 [2024-07-15 21:47:27.331153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.331218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.331443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.331503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.331686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.331747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.331921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.331980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.332285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.332400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 
00:25:36.738 [2024-07-15 21:47:27.332539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a4170 (9): Bad file descriptor 00:25:36.738 [2024-07-15 21:47:27.332801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.332901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.333115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.333201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.333454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.333514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.333726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.333784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.334003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.334069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 
00:25:36.738 [2024-07-15 21:47:27.334261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.334323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.334558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.334621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.334859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.334920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.335108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.335188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.335363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.335421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 
00:25:36.738 [2024-07-15 21:47:27.335597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.335656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.335866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.335924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.336093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.336166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.336402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.336460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.336659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.336726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 
00:25:36.738 [2024-07-15 21:47:27.336915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.336978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.337222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.337289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.337495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.337554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.337794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.337855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.338029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.338089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 
00:25:36.738 [2024-07-15 21:47:27.338277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.338337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.338511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.338569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.338800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.338859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.339083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.339161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.339340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.339401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 
00:25:36.738 [2024-07-15 21:47:27.339641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.339716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.339914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.339975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.340170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.340267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.340457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.340519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.738 qpair failed and we were unable to recover it. 00:25:36.738 [2024-07-15 21:47:27.340699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.738 [2024-07-15 21:47:27.340764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 
00:25:36.739 [2024-07-15 21:47:27.340969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.341029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.341251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.341312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.341487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.341547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.341707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.341773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.341937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.341997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 
00:25:36.739 [2024-07-15 21:47:27.342175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.342244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.342432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.342492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.342692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.342751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.342920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.343002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.343188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.343257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 
00:25:36.739 [2024-07-15 21:47:27.343459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.343522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.343686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.343747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.343951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.344012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.344212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.344274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.344483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.344545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 
00:25:36.739 [2024-07-15 21:47:27.344714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.344774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.344959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.345022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.345216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.345276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.345462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.345528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 00:25:36.739 [2024-07-15 21:47:27.345703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.739 [2024-07-15 21:47:27.345763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.739 qpair failed and we were unable to recover it. 
00:25:36.739 [2024-07-15 21:47:27.345950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.739 [2024-07-15 21:47:27.346009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.739 qpair failed and we were unable to recover it.
[the same connect() failed, errno = 111 / sock connection error / qpair failed triplet repeats for tqpair=0x7fc080000b90 through 2024-07-15 21:47:27.356]
00:25:36.741 [2024-07-15 21:47:27.356517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.741 [2024-07-15 21:47:27.356584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.741 qpair failed and we were unable to recover it.
[the same triplet repeats for tqpair=0x7fc088000b90 through 2024-07-15 21:47:27.375]
00:25:36.743 [2024-07-15 21:47:27.375290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.375350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.375549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.375609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.375785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.375843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.376052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.376111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.376356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.376418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 
00:25:36.743 [2024-07-15 21:47:27.376595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.376653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.376890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.376948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.377129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.377214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.377458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.377516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.377726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.377785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 
00:25:36.743 [2024-07-15 21:47:27.377953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.378012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.378207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.378269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.378450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.378509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.378717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.378776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.379001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.379063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 
00:25:36.743 [2024-07-15 21:47:27.379264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.379324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.379481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.379539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.379739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.379800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.379984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.380042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.380245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.380307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 
00:25:36.743 [2024-07-15 21:47:27.380463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.380532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.380712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.380770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.380965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.381026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.381209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.381269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.381423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.381482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 
00:25:36.743 [2024-07-15 21:47:27.381695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.381754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.381972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.382031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.382191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.382252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.382457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.382528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.382770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.382829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 
00:25:36.743 [2024-07-15 21:47:27.383057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.383117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.743 [2024-07-15 21:47:27.383293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.743 [2024-07-15 21:47:27.383353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.743 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.383589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.383648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.383835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.383899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.384156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.384216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 
00:25:36.744 [2024-07-15 21:47:27.384449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.384508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.384717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.384776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.385004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.385063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.385341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.385406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.385595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.385664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 
00:25:36.744 [2024-07-15 21:47:27.385910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.385973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.386166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.386231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.386472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.386532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.386766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.386829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.387061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.387122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 
00:25:36.744 [2024-07-15 21:47:27.387352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.387414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.387577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.387637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.387862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.387932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.388132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.388211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.388439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.388499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 
00:25:36.744 [2024-07-15 21:47:27.388749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.388811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.388980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.389039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.389296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.389357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.389602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.389660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.389835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.389897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 
00:25:36.744 [2024-07-15 21:47:27.390078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.390157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.390394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.390456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.744 [2024-07-15 21:47:27.390700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.744 [2024-07-15 21:47:27.390768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.744 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.391040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.391098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.391330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.391390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 
00:25:36.745 [2024-07-15 21:47:27.391606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.391664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.391852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.391913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.392100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.392186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.392369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.392430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.392593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.392652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 
00:25:36.745 [2024-07-15 21:47:27.392818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.392877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.393041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.393099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.393313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.393373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.393567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.393628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.393817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.393876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 
00:25:36.745 [2024-07-15 21:47:27.394128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.394202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.394370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.394429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.394704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.394764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.394952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.395012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 00:25:36.745 [2024-07-15 21:47:27.395237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.745 [2024-07-15 21:47:27.395299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.745 qpair failed and we were unable to recover it. 
00:25:36.745 [2024-07-15 21:47:27.395464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.745 [2024-07-15 21:47:27.395524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.745 qpair failed and we were unable to recover it.
00:25:36.748 [2024-07-15 21:47:27.426100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.426133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 00:25:36.748 [2024-07-15 21:47:27.426255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.426288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 00:25:36.748 [2024-07-15 21:47:27.426385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.426418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 00:25:36.748 [2024-07-15 21:47:27.426525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.426558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 00:25:36.748 [2024-07-15 21:47:27.426682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.426715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 
00:25:36.748 [2024-07-15 21:47:27.426834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.426867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 00:25:36.748 [2024-07-15 21:47:27.426968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.427001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 00:25:36.748 [2024-07-15 21:47:27.427104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.427136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 00:25:36.748 [2024-07-15 21:47:27.427273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.427305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 00:25:36.748 [2024-07-15 21:47:27.427403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.427435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.748 qpair failed and we were unable to recover it. 
00:25:36.748 [2024-07-15 21:47:27.427537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.748 [2024-07-15 21:47:27.427569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.427689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.427721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.427838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.427870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.427973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.428006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.428101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.428134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 
00:25:36.749 [2024-07-15 21:47:27.428261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.428294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.428411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.428443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.428561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.428599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.428700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.428732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.428838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.428871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 
00:25:36.749 [2024-07-15 21:47:27.428972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.429005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.429110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.429154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.429300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.429333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.429472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.429504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.429601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.429636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 
00:25:36.749 [2024-07-15 21:47:27.429746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.429778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.429882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.429914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.430022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.430054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.430161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.430195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.430300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.430332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 
00:25:36.749 [2024-07-15 21:47:27.430474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.430506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.430629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.430663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.430754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.430786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.430910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.430970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.431110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.431160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 
00:25:36.749 [2024-07-15 21:47:27.431273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.749 [2024-07-15 21:47:27.431307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.749 qpair failed and we were unable to recover it. 00:25:36.749 [2024-07-15 21:47:27.431419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.431453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.431564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.431597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.431703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.431738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.431857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.431890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 
00:25:36.750 [2024-07-15 21:47:27.432004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.432037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.432163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.432197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.432304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.432338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.432441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.432475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.432594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.432627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 
00:25:36.750 [2024-07-15 21:47:27.432734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.432767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.432875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.432919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.433051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.433098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.433238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.433275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.433411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.433445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 
00:25:36.750 [2024-07-15 21:47:27.433546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.433579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.433683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.433716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.433826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.433860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.433971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.434003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.434102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.434136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 
00:25:36.750 [2024-07-15 21:47:27.434256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.434289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.434387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.434419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.434512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.434549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.434653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.434686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.434789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.434822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 
00:25:36.750 [2024-07-15 21:47:27.434928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.434963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.435070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.435104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.435205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.435238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.435346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.435379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.435492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.435524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 
00:25:36.750 [2024-07-15 21:47:27.435622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.435694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.435878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.435937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.436108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.436179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.436349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.436408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.436582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.436641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 
00:25:36.750 [2024-07-15 21:47:27.436812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.436870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.750 [2024-07-15 21:47:27.437109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.750 [2024-07-15 21:47:27.437198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.750 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.437376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.437435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.437615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.437676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.437860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.437922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 
00:25:36.751 [2024-07-15 21:47:27.438096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.438169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.438350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.438411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.438589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.438648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.438810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.438870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.439045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.439105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 
00:25:36.751 [2024-07-15 21:47:27.439346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.439400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.439561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.439620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.439785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.439843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.440018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.440077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 00:25:36.751 [2024-07-15 21:47:27.440283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.751 [2024-07-15 21:47:27.440344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.751 qpair failed and we were unable to recover it. 
00:25:36.751 [2024-07-15 21:47:27.440524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.440585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.440760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.440818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.440996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.441057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.441249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.441316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.441527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.441559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.441734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.441792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.441949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.442009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.442202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.442264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.442438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.442498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.442704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.442736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.442919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.442977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.443163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.443224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.443407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.443479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.443640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.443701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.443898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.443957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.444120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.444237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.444424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.444477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.444683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.444742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.444898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.444957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.445135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.445207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.445391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.445451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.445626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.445688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.445878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.445938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.446118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.446192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.446360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.446425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.446572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.751 [2024-07-15 21:47:27.446631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.751 qpair failed and we were unable to recover it.
00:25:36.751 [2024-07-15 21:47:27.446823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.446882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.447104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.447173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.447363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.447425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.447600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.447659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.447872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.447905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.448085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.448157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.448379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.448437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.448609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.448671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.448845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.448904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.449167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.449228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.449438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.449498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.449668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.449726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.449909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.449968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.450166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.450228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.450400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.450461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.450646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.450705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.450879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.450946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.451164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.451223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.451410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.451469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.451642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.451702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.451935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.451993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.452176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.452236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.452429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.452488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.452665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.452726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.452915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.452977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.453167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.453238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.453426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.453500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.453729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.453791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.453985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.454052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.454298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.454360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.454524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.454583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.454751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.454819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.455012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.455094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.455321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.455385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.455551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.752 [2024-07-15 21:47:27.455611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.752 qpair failed and we were unable to recover it.
00:25:36.752 [2024-07-15 21:47:27.455787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.455847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.456018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.456078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.456266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.456328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.456509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.456570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.456746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.456804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.456980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.457039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.457230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.457291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.457458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.457518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.457687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.457746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.457916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.457976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.458171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.458234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.458417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.458478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.458668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.458731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.458901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.458955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.459122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.459196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.459366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.459425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.459598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.459645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.459821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.459882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.460096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.460195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.460372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.460431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.460713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.460771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.460935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.460984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.461165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.461230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.461446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.461479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.461701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.461755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.461919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.461978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.462192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.462262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.462451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.462508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.462780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.462839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.463049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.463110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.463354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.463400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.463572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.463633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.463880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.463949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.464169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.753 [2024-07-15 21:47:27.464227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:36.753 qpair failed and we were unable to recover it.
00:25:36.753 [2024-07-15 21:47:27.464438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.753 [2024-07-15 21:47:27.464497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.753 qpair failed and we were unable to recover it. 00:25:36.753 [2024-07-15 21:47:27.464731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.753 [2024-07-15 21:47:27.464785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.465012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.465082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.465272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.465333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.465572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.465630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 
00:25:36.754 [2024-07-15 21:47:27.465802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.465863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.466151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.466212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.466421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.466454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.466625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.466673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.466917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.466977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 
00:25:36.754 [2024-07-15 21:47:27.467183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.467244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.467485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.467553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.467760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.467819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.468058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.468117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.468386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.468446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 
00:25:36.754 [2024-07-15 21:47:27.468618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.468676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.468889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.468948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.469155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.469215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.469402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.469462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.469670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.469729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 
00:25:36.754 [2024-07-15 21:47:27.469919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.469982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.470223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.470283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.470560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.470618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.470868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.470933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.471168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.471229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 
00:25:36.754 [2024-07-15 21:47:27.471420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.471479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.471725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.471786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.472005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.472065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.472290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.472352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.472534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.472595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 
00:25:36.754 [2024-07-15 21:47:27.472835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.472894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.473109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.473181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.754 qpair failed and we were unable to recover it. 00:25:36.754 [2024-07-15 21:47:27.473394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.754 [2024-07-15 21:47:27.473458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.473649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.473708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.473908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.473967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 
00:25:36.755 [2024-07-15 21:47:27.474179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.474239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.474420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.474478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.474649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.474708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.474917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.474982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.475167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.475228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 
00:25:36.755 [2024-07-15 21:47:27.475468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.475528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.475708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.475767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.475961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.476019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.476247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.476316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.476499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.476560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 
00:25:36.755 [2024-07-15 21:47:27.476758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.476819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.477077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.477136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.477373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.477431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.477654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.477712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.477937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.477996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 
00:25:36.755 [2024-07-15 21:47:27.478195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.478259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.478509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.478578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.478760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.478821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.479060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.479119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.479317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.479378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 
00:25:36.755 [2024-07-15 21:47:27.479569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.479628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.479805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.479867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.480052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.480114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.480328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.480390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.480574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.480633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 
00:25:36.755 [2024-07-15 21:47:27.480821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.480884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.481127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.481199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.481418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.481477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.481665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.481726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 00:25:36.755 [2024-07-15 21:47:27.481919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.755 [2024-07-15 21:47:27.481981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.755 qpair failed and we were unable to recover it. 
00:25:36.756 [2024-07-15 21:47:27.482214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.482276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.482501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.482560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.482778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.482838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.483023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.483083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.483325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.483385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 
00:25:36.756 [2024-07-15 21:47:27.483628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.483687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.483909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.483968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.484211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.484271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.484442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.484500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.484711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.484770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 
00:25:36.756 [2024-07-15 21:47:27.484988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.485045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.485233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.485293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.485475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.485536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.485769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.485860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.486069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.486133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 
00:25:36.756 [2024-07-15 21:47:27.486361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.486421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.486647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.486707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.486899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.486961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.487203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.487264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 00:25:36.756 [2024-07-15 21:47:27.487547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.756 [2024-07-15 21:47:27.487611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:36.756 qpair failed and we were unable to recover it. 
00:25:36.756 [2024-07-15 21:47:27.487845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.487904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.488092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.488161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.488399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.488455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.488641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.488699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.488904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.488961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.489192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.489251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.489491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.489548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.489754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.489810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.490045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.490103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:36.756 [2024-07-15 21:47:27.490349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:36.756 [2024-07-15 21:47:27.490409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:36.756 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.490602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.490660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.490894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.490952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.491153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.491214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.491440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.491499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.491730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.491789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.492025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.492087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.492332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.492433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.492689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.492753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.492997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.040 [2024-07-15 21:47:27.493077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.040 qpair failed and we were unable to recover it.
00:25:37.040 [2024-07-15 21:47:27.493297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.493359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.493600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.493660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.493875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.493935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.494108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.494187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.494378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.494437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.494643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.494702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.494877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.494936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.495105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.495180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.495357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.495418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.495609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.495670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.495916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.495976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.496170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.496231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.496407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.496466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.496661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.496720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.496965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.497033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.497231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.497291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.497507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.497580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.497761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.497822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.498015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.498075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.498273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.498337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.498542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.498602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.498782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.498842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.499067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.499126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.499322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.499384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.499573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.499631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.499800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.499859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.500093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.500171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.500361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.500423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.500615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.500674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.500864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.500923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.041 [2024-07-15 21:47:27.501117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.041 [2024-07-15 21:47:27.501197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.041 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.501362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.501421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.501590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.501647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.501831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.501890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.502082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.502160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.502359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.502418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.502653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.502711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.502899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.502960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.503154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.503215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.503390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.503448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.503623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.503683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.503857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.503926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.504103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.504187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.504386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.504444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.504638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.504696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.504885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.504945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.505125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.505203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.505430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.505490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.505683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.505745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.505922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.505980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.506173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.506234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.506422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.506482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.506660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.506717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.506884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.506943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.507115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.507188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.507386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.507446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.507622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.507681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.507852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.507910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.508084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.508165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.508348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.508409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.508602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.508660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.042 qpair failed and we were unable to recover it.
00:25:37.042 [2024-07-15 21:47:27.508817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.042 [2024-07-15 21:47:27.508876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.509064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.509123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.509333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.509392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.509578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.509637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.509819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.509878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.510043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.510101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.510297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.510356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.510539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.510607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.510774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.510832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.511046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.511108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.511308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.511368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.511589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.511648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.512808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.512842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.512970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.513033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.513171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.513224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.513354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.513408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.513513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.513592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.513759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.513810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.513900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.513930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.514062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.514092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.514231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.514290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.515797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.515833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.515948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.515976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.516111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.516171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.516310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.516358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.516501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.516556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.516690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.516742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.516903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.516963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.517063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.517093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.043 [2024-07-15 21:47:27.517233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.043 [2024-07-15 21:47:27.517285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.043 qpair failed and we were unable to recover it.
00:25:37.044 [2024-07-15 21:47:27.517406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.044 [2024-07-15 21:47:27.517434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.044 qpair failed and we were unable to recover it.
00:25:37.044 [2024-07-15 21:47:27.517539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.044 [2024-07-15 21:47:27.517578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.044 qpair failed and we were unable to recover it.
00:25:37.044 [2024-07-15 21:47:27.517702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.044 [2024-07-15 21:47:27.517755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.044 qpair failed and we were unable to recover it.
00:25:37.044 [2024-07-15 21:47:27.517870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.044 [2024-07-15 21:47:27.517920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.044 qpair failed and we were unable to recover it.
00:25:37.044 [2024-07-15 21:47:27.518022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.044 [2024-07-15 21:47:27.518091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.044 qpair failed and we were unable to recover it.
00:25:37.044 [2024-07-15 21:47:27.518211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.518260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.518379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.518428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.518574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.518629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.518793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.518865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.519011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.519058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 
00:25:37.044 [2024-07-15 21:47:27.519219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.519278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.519414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.519469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.519592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.519643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.519766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.519816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.519921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.519951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 
00:25:37.044 [2024-07-15 21:47:27.520089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.520147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.520272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.520323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.520443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.520490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.520620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.520670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.520801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.520847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 
00:25:37.044 [2024-07-15 21:47:27.520975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.521029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.521188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.521219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.521337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.521389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.521474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.521503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.521622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.521673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 
00:25:37.044 [2024-07-15 21:47:27.521762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.521792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.521895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.521925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.522032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.522061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.522163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.522191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.522280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.522309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 
00:25:37.044 [2024-07-15 21:47:27.522427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.044 [2024-07-15 21:47:27.522479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.044 qpair failed and we were unable to recover it. 00:25:37.044 [2024-07-15 21:47:27.522557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.522588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.522681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.522709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.522789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.522815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.522908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.522935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 
00:25:37.045 [2024-07-15 21:47:27.523023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.523051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.523156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.523199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.523303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.523331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.523431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.523461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.523561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.523590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 
00:25:37.045 [2024-07-15 21:47:27.523688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.523722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.523806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.523834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.523927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.523956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.524108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.524152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.524250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.524281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 
00:25:37.045 [2024-07-15 21:47:27.524401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.524431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.524580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.524639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.524765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.524821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.524972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.525032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.525173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.525202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 
00:25:37.045 [2024-07-15 21:47:27.525341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.525403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.525543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.525593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.525694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.525749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.525836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.525865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.526015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.526094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 
00:25:37.045 [2024-07-15 21:47:27.526265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.526297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.526425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.526473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.526586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.526638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.526832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.526899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.527070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.527163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 
00:25:37.045 [2024-07-15 21:47:27.527359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.527428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.045 qpair failed and we were unable to recover it. 00:25:37.045 [2024-07-15 21:47:27.527641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.045 [2024-07-15 21:47:27.527700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.527835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.527890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.528075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.528118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.528289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.528332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 
00:25:37.046 [2024-07-15 21:47:27.528470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.528543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.528707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.528776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.528960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.529028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.529195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.529224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.529369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.529416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 
00:25:37.046 [2024-07-15 21:47:27.529545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.529596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.529710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.529760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.529853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.529882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.530014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.530062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.530156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.530186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 
00:25:37.046 [2024-07-15 21:47:27.530307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.530360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.530466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.530520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.530636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.530687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.530806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.530856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 00:25:37.046 [2024-07-15 21:47:27.530971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.046 [2024-07-15 21:47:27.531021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.046 qpair failed and we were unable to recover it. 
00:25:37.046 [2024-07-15 21:47:27.531179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.046 [2024-07-15 21:47:27.531256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.046 qpair failed and we were unable to recover it.
00:25:37.046 [2024-07-15 21:47:27.531400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.046 [2024-07-15 21:47:27.531459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.046 qpair failed and we were unable to recover it.
00:25:37.046 [2024-07-15 21:47:27.531629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.046 [2024-07-15 21:47:27.531696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.046 qpair failed and we were unable to recover it.
00:25:37.046 [2024-07-15 21:47:27.531847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.046 [2024-07-15 21:47:27.531914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.046 qpair failed and we were unable to recover it.
00:25:37.046 [2024-07-15 21:47:27.532058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.046 [2024-07-15 21:47:27.532118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.532333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.532403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.532555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.532624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.532771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.532831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.533019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.533097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.533258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.533316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.533512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.533571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.533679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.533733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.533865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.533909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.534094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.534168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.534304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.534378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.534544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.534589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.534752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.534799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.534920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.534977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.535175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.535229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.535351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.535404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.535656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.535717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.535872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.535929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.536129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.536186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.536341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.536390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.536605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.536669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.536914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.536979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.537169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.537198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.537350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.537400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.537525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.537574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.537720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.537764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.537903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.537961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.538092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.538149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.538312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.047 [2024-07-15 21:47:27.538340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.047 qpair failed and we were unable to recover it.
00:25:37.047 [2024-07-15 21:47:27.538515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.538557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.538736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.538780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.538954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.539054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.539323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.539388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.539547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.539618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.539814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.539858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.539994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.540036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.540187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.540233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.540457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.540515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.540652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.540710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.540877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.540939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.541062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.541111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.541225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.541292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.541522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.541581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.541752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.541825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.542054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.542113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.542274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.542306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.542469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.542513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.542614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.542669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.542791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.542841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.542986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.543034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.543178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.543225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.543439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.543502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.543625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.543680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.543896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.543962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.544100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.544195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.544357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.544415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.544579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.544604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.544741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.544788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.545028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.545086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.545261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.545305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.545478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.545524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.048 qpair failed and we were unable to recover it.
00:25:37.048 [2024-07-15 21:47:27.545743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.048 [2024-07-15 21:47:27.545784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.545905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.545958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.546175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.546201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.546343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.546376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.546494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.546544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.546671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.546719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.546847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.546891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.547094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.547153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.547243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.547270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.547368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.547420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.547516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.547561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.547673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.547727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.547834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.547863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.547959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.547991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.548084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.548114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.548276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.548325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.548448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.548505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.548619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.548668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.548798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.548842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.549050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.549113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.549246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.549309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.549416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.549468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.549583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.549632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.549717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.549744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.549831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.549859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.549942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.549969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.550058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.550085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.550175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.550203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.550283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.550311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.550402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.550429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.550545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.550572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.550686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.049 [2024-07-15 21:47:27.550712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.049 qpair failed and we were unable to recover it.
00:25:37.049 [2024-07-15 21:47:27.550827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.550855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.550973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.550999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.551105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.551131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.551267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.551296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.551463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.551520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.551606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.551633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.551721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.551749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.551913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.551981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.552151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.552203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.552360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.552426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.552639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.552689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.552797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.552846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.553002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.553054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.553207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.553256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.553399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.553429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.553538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.553597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.553771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.553831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.553936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.050 [2024-07-15 21:47:27.553990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.050 qpair failed and we were unable to recover it.
00:25:37.050 [2024-07-15 21:47:27.554069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.554097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.554208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.554255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.554349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.554375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.554496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.554549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.554638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.554667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 
00:25:37.050 [2024-07-15 21:47:27.554818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.554863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.554986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.555044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.555181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.555209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.555429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.555489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.555598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.555648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 
00:25:37.050 [2024-07-15 21:47:27.555809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.555857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.556072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.556176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.556354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.556401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.556620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.556680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.556835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.556877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 
00:25:37.050 [2024-07-15 21:47:27.557090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.557159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.557319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.050 [2024-07-15 21:47:27.557391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.050 qpair failed and we were unable to recover it. 00:25:37.050 [2024-07-15 21:47:27.557568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.557621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.557734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.557783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.557902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.557962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 
00:25:37.051 [2024-07-15 21:47:27.558092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.558148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.558238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.558267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.558373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.558424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.558545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.558597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.558694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.558723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 
00:25:37.051 [2024-07-15 21:47:27.558861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.558913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.559043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.559093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.559314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.559381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.559500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.559555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.559766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.559833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 
00:25:37.051 [2024-07-15 21:47:27.560049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.560110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.560308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.560367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.560497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.560548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.560658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.560707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.560825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.560874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 
00:25:37.051 [2024-07-15 21:47:27.560977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.561032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.561161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.561207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.561317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.561372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.561480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.561532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.561617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.561646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 
00:25:37.051 [2024-07-15 21:47:27.561793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.561825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.561993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.562060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.562202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.562247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.562414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.562482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.051 qpair failed and we were unable to recover it. 00:25:37.051 [2024-07-15 21:47:27.562712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.051 [2024-07-15 21:47:27.562779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 
00:25:37.052 [2024-07-15 21:47:27.562910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.562955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.563167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.563233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.563511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.563571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.563720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.563766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.563902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.563946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 
00:25:37.052 [2024-07-15 21:47:27.564085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.564129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.564360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.564423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.564586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.564638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.564750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.564800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.564918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.564967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 
00:25:37.052 [2024-07-15 21:47:27.565109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.565171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.565333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.565397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.565515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.565563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.565648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.565687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.565807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.565858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 
00:25:37.052 [2024-07-15 21:47:27.566017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.566077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.566169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.566199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.566317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.566347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.566483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.566528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.566608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.566640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 
00:25:37.052 [2024-07-15 21:47:27.566744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.566795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.566914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.566963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.567151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.567199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.567310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.567360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 00:25:37.052 [2024-07-15 21:47:27.567483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.052 [2024-07-15 21:47:27.567510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.052 qpair failed and we were unable to recover it. 
00:25:37.052 [2024-07-15 21:47:27.567593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.567619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.567778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.567847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.568027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.568072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.568161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.568191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.568303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.568343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 
00:25:37.053 [2024-07-15 21:47:27.568517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.568563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.568646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.568674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.568795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.568842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.569014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.569060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.569160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.569199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 
00:25:37.053 [2024-07-15 21:47:27.569279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.569318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.569424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.569450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.569556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.569589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.569694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.569724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 00:25:37.053 [2024-07-15 21:47:27.569838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.053 [2024-07-15 21:47:27.569868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.053 qpair failed and we were unable to recover it. 
00:25:37.053 [2024-07-15 21:47:27.569968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.569997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.570150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.570193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.570305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.570354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.570491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.570545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.570651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.570700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.570790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.570818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.570965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.571022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.571159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.571187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.571354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.571450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.571657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.571734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.571896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.571938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.572113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.572168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.572357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.572422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.572613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.572656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.572795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.572862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.053 [2024-07-15 21:47:27.573012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.053 [2024-07-15 21:47:27.573053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.053 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.573198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.573254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.573416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.573479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.573603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.573653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.573755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.573796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.573989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.574038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.574136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.574168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.574279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.574319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.574436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.574516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.574668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.574713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.574830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.574878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.575022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.575069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.575152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.575182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.575271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.575300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.575390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.575421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.575535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.575584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.575711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.575776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.575919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.575987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.576213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.576250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.576475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.576532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.576690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.576761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.576912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.576971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.577182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.577242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.577378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.577419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.577541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.577608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.577744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.577784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.577913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.577984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.578125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.578160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.578262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.578314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.578399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.578428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.578536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.578586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.578668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.578695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.578783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.054 [2024-07-15 21:47:27.578810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.054 qpair failed and we were unable to recover it.
00:25:37.054 [2024-07-15 21:47:27.578889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.578917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.579003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.579032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.579118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.579150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.579247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.579275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.579382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.579408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.579502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.579531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.579619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.579647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.579783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.579814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.579911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.579941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.580044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.580074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.580200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.580230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.580349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.580380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.580527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.580569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.580674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.580705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.580812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.580839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.580934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.580963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.581055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.581084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.581241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.581294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.581404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.581452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.581566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.581614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.581756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.581812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.581911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.581951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.582055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.582085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.582178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.582207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.582290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.582319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.582404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.582447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.582558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.582610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.582707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.582735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.582835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.582864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.582947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.582976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.583093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.055 [2024-07-15 21:47:27.583126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.055 qpair failed and we were unable to recover it.
00:25:37.055 [2024-07-15 21:47:27.583280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.583310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.583417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.583467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.583553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.583582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.583664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.583693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.583776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.583805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.583927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.583956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.584060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.584106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.584269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.584335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.584490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.584565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.584721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.584780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.584915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.584967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.585114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.585209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.586486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.586521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.586628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.586669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.586771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.586825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.586908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.056 [2024-07-15 21:47:27.586936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.056 qpair failed and we were unable to recover it.
00:25:37.056 [2024-07-15 21:47:27.587058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.587087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.587254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.587303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.587413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.587468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.587575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.587625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.587718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.587747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 
00:25:37.056 [2024-07-15 21:47:27.587833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.587866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.587998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.588038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.588136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.588199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.588399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.588429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.588536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.588586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 
00:25:37.056 [2024-07-15 21:47:27.588696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.588746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.588865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.588912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.589056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.589110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.589274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.589329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.589507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.589564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 
00:25:37.056 [2024-07-15 21:47:27.589681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.589728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.589810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.589837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.590029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.590092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.590318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.590379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.056 qpair failed and we were unable to recover it. 00:25:37.056 [2024-07-15 21:47:27.590639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.056 [2024-07-15 21:47:27.590698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 
00:25:37.057 [2024-07-15 21:47:27.590891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.590966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.591198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.591259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.591375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.591401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.591545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.591592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.591690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.591719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 
00:25:37.057 [2024-07-15 21:47:27.591871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.591930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.592032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.592062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.592222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.592274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.592375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.592416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.592579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.592633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 
00:25:37.057 [2024-07-15 21:47:27.592749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.592795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.592963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.593011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.593243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.593296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.593434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.593478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.593594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.593648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 
00:25:37.057 [2024-07-15 21:47:27.593799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.593841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.593926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.593953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.594108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.594163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.594348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.594413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.594575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.594632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 
00:25:37.057 [2024-07-15 21:47:27.594790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.594844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.594953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.595013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.595152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.057 [2024-07-15 21:47:27.595198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.057 qpair failed and we were unable to recover it. 00:25:37.057 [2024-07-15 21:47:27.595403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.595461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.595598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.595668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 
00:25:37.058 [2024-07-15 21:47:27.595808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.595877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.596105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.596177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.596372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.596420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.596579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.596628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.596713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.596741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 
00:25:37.058 [2024-07-15 21:47:27.596927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.596958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.597109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.597164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.597283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.597329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.597452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.597515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.597600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.597629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 
00:25:37.058 [2024-07-15 21:47:27.597741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.597788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.597874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.597902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.598012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.598062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.598198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.598225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.598315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.598342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 
00:25:37.058 [2024-07-15 21:47:27.598463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.598490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.598574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.598601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.598715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.598742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.598923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.598978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.599073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.599102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 
00:25:37.058 [2024-07-15 21:47:27.599264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.599307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.599459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.599541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.599668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.599735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.599928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.599984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.600108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.600191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 
00:25:37.058 [2024-07-15 21:47:27.600308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.600355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.600483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.600528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.600633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.600687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.600816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.600868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.058 qpair failed and we were unable to recover it. 00:25:37.058 [2024-07-15 21:47:27.600968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.058 [2024-07-15 21:47:27.601021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 
00:25:37.059 [2024-07-15 21:47:27.601152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.601197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.601324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.601374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.601526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.601579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.601700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.601747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.601904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.601970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 
00:25:37.059 [2024-07-15 21:47:27.602178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.602239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.602419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.602484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.602638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.602665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.602826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.602883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.603031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.603098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 
00:25:37.059 [2024-07-15 21:47:27.603253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.603295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.603518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.603579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.603754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.603813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.603982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.604040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.604216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.604268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 
00:25:37.059 [2024-07-15 21:47:27.604398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.604445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.604548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.604589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.604684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.604712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.604818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.604869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.604997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.605050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 
00:25:37.059 [2024-07-15 21:47:27.605158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.605186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.605294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.605321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.605423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.605476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.605661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.605709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.605863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.605933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 
00:25:37.059 [2024-07-15 21:47:27.606082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.606185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.606390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.606451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.606589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.606657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.606804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.606830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.606998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.607055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 
00:25:37.059 [2024-07-15 21:47:27.607195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.059 [2024-07-15 21:47:27.607249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.059 qpair failed and we were unable to recover it. 00:25:37.059 [2024-07-15 21:47:27.607420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.607465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.607635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.607675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.607788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.607817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.607927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.607975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 
00:25:37.060 [2024-07-15 21:47:27.608081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.608109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.608221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.608262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.608391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.608455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.608560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.608614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.608723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.608781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 
00:25:37.060 [2024-07-15 21:47:27.608926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.608956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.609065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.609136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.609312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.609374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.609530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.609593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.609750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.609818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 
00:25:37.060 [2024-07-15 21:47:27.609974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.610018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.610143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.610172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.610288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.610337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.610423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.610451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.610619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.610673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 
00:25:37.060 [2024-07-15 21:47:27.610757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.610785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.610869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.610896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.611015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.611043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.611211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.611258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.611359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.611388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 
00:25:37.060 [2024-07-15 21:47:27.611495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.611547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.611630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.611658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.611794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.611841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.611937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.611966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.612087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.612115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 
00:25:37.060 [2024-07-15 21:47:27.612218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.612247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.612339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.060 [2024-07-15 21:47:27.612366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.060 qpair failed and we were unable to recover it. 00:25:37.060 [2024-07-15 21:47:27.612482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.612527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.612660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.612723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.612864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.612919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 
00:25:37.061 [2024-07-15 21:47:27.613022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.613050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.613189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.613221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.613415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.613459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.613641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.613700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.613856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.613918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 
00:25:37.061 [2024-07-15 21:47:27.614131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.614205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.614417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.614460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.614612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.614660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.614768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.614816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.614897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.614924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 
00:25:37.061 [2024-07-15 21:47:27.615079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.615147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.615234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.615261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.615339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.615371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.615453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.615480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.615618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.615680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 
00:25:37.061 [2024-07-15 21:47:27.615835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.615862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.615959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.615985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.616084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.616164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.616296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.616347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.616555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.616612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 
00:25:37.061 [2024-07-15 21:47:27.616747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.616788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.616913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.616971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.617124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.617176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.617351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.617408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.617497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.617526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 
00:25:37.061 [2024-07-15 21:47:27.617681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.617743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.617831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.617859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.618059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.618125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.618292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.618335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 00:25:37.061 [2024-07-15 21:47:27.618526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.061 [2024-07-15 21:47:27.618579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.061 qpair failed and we were unable to recover it. 
00:25:37.062 [2024-07-15 21:47:27.618741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.618769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.618971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.619028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.619197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.619255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.619404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.619471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.619609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.619679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 
00:25:37.062 [2024-07-15 21:47:27.619832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.619872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.620013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.620077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.620225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.620281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.620420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.620482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.620619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.620679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 
00:25:37.062 [2024-07-15 21:47:27.620879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.620937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.621122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.621175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.621306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.621374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.621538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.621579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.621709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.621777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 
00:25:37.062 [2024-07-15 21:47:27.621933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.621990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.622094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.622121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.622242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.622288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.622375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.622403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 00:25:37.062 [2024-07-15 21:47:27.622509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.062 [2024-07-15 21:47:27.622537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.062 qpair failed and we were unable to recover it. 
00:25:37.062 [2024-07-15 21:47:27.622658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.622708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.622828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.622899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.623130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.623195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.623343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.623402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.623538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.623606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.623806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.623848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.624032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.624098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.624277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.624330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.624419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.624447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.624558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.624610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.062 qpair failed and we were unable to recover it.
00:25:37.062 [2024-07-15 21:47:27.624715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.062 [2024-07-15 21:47:27.624764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.624881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.624929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.625017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.625051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.625150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.625181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.625276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.625304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.625419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.625461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.625586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.625647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.625779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.625849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.625985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.626037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.626178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.626207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.626368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.626435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.626571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.626612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.626747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.626774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.626923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.626949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.627068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.627097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.627205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.627237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.627416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.627445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.627603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.627675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.627866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.627940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.628151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.063 [2024-07-15 21:47:27.628215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.063 qpair failed and we were unable to recover it.
00:25:37.063 [2024-07-15 21:47:27.628386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.628449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.628557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.628624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.628759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.628808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.628953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.629023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.629150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.629201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.629295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.629323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.629425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.629452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.629572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.629633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.629740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.629787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.629872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.629899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.629988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.630043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.630188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.630256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.630409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.630473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.630631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.630687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.630795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.630844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.630949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.630977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.631073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.631101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.631208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.631259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.631345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.631373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.631463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.631490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.631595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.631645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.631748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.631796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.631903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.631952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.632041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.632068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.632157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.632215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.632350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.632416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.632556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.632606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.632731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.632781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.632921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.632986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.633115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.633178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.633295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.633365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.064 qpair failed and we were unable to recover it.
00:25:37.064 [2024-07-15 21:47:27.633503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.064 [2024-07-15 21:47:27.633574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.633727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.633793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.633955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.633985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.634118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.634217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.634364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.634423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.634591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.634619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.634744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.634784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.634899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.634945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.635059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.635101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.635252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.635293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.635409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.635446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.635544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.635571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.635679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.635727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.635821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.635848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.635932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.635960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.636963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.636990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.637077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.637104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.637208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.637236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.637359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.637429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.637567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.637620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.637740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.637781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.637917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.637991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.065 qpair failed and we were unable to recover it.
00:25:37.065 [2024-07-15 21:47:27.638134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.065 [2024-07-15 21:47:27.638194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.066 qpair failed and we were unable to recover it.
00:25:37.066 [2024-07-15 21:47:27.638313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.066 [2024-07-15 21:47:27.638385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.066 qpair failed and we were unable to recover it.
00:25:37.066 [2024-07-15 21:47:27.638543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.066 [2024-07-15 21:47:27.638583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.066 qpair failed and we were unable to recover it.
00:25:37.066 [2024-07-15 21:47:27.638759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.066 [2024-07-15 21:47:27.638789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.066 qpair failed and we were unable to recover it.
00:25:37.066 [2024-07-15 21:47:27.638953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.639009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.639166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.639195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.639314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.639361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.639449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.639478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.639583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.639632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 
00:25:37.066 [2024-07-15 21:47:27.639710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.639737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.639818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.639845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.639930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.639959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.640050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.640080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.640162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.640190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 
00:25:37.066 [2024-07-15 21:47:27.640274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.640302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.640391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.640420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.640508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.640536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.640635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.640686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.640796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.640843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 
00:25:37.066 [2024-07-15 21:47:27.640974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.641052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.641177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.641230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.641376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.641406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.641532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.641618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.641770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.641829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 
00:25:37.066 [2024-07-15 21:47:27.641965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.642011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.642120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.642159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.642251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.642278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.642381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.642430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.642530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.642580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 
00:25:37.066 [2024-07-15 21:47:27.642664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.642691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.642785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.642812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.642901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.066 [2024-07-15 21:47:27.642928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.066 qpair failed and we were unable to recover it. 00:25:37.066 [2024-07-15 21:47:27.643014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.643045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.643149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.643178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 
00:25:37.067 [2024-07-15 21:47:27.643274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.643303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.643405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.643475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.643647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.643708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.643856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.643884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.644038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.644096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 
00:25:37.067 [2024-07-15 21:47:27.644256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.644285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.644372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.644402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.644497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.644555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.644695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.644763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.644894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.644922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 
00:25:37.067 [2024-07-15 21:47:27.645076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.645115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.645255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.645306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.645460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.645514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.645662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.645731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.645858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.645905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 
00:25:37.067 [2024-07-15 21:47:27.646020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.646067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.646209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.646274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.646412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.646481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.646631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.646658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.646815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.646851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 
00:25:37.067 [2024-07-15 21:47:27.647017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.647083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.647226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.647273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.647385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.647433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.647551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.647599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.647723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.647768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 
00:25:37.067 [2024-07-15 21:47:27.647884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.647913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.648026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.648090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.648207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.067 [2024-07-15 21:47:27.648257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.067 qpair failed and we were unable to recover it. 00:25:37.067 [2024-07-15 21:47:27.648341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.648368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.648452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.648480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 
00:25:37.068 [2024-07-15 21:47:27.648562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.648589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.648671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.648697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.648787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.648814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.648896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.648922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.649013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 
00:25:37.068 [2024-07-15 21:47:27.649127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.649245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.649365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.649488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.649599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 
00:25:37.068 [2024-07-15 21:47:27.649717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.649838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.649958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.649988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.650083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.650111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.650228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.650266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 
00:25:37.068 [2024-07-15 21:47:27.650375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.650411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.650530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.650570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.650676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.650725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.650838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.650874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.650993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.651056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 
00:25:37.068 [2024-07-15 21:47:27.651218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.651258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.651443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.651485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.651654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.651685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.651817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.651864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 00:25:37.068 [2024-07-15 21:47:27.651972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.068 [2024-07-15 21:47:27.652036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.068 qpair failed and we were unable to recover it. 
00:25:37.068 [2024-07-15 21:47:27.652172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.068 [2024-07-15 21:47:27.652231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.068 qpair failed and we were unable to recover it.
00:25:37.068 [2024-07-15 21:47:27.652352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.068 [2024-07-15 21:47:27.652422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.068 qpair failed and we were unable to recover it.
00:25:37.068 [2024-07-15 21:47:27.652606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.068 [2024-07-15 21:47:27.652665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.652814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.652842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.652967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.652994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.653075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.653103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.653204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.653249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.653354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.653419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.653574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.653635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.653774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.653843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.653994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.654032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.654200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.654264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.654401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.654442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.654549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.654614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.654744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.654810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.654968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.654999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.655120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.655218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.655316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.655347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.655450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.655486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.655589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.655616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.655726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.655753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.655865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.655896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.655982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.656010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.656103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.069 [2024-07-15 21:47:27.656131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.069 qpair failed and we were unable to recover it.
00:25:37.069 [2024-07-15 21:47:27.656261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.656316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.656445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.656511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.656637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.656706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.656838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.656882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.657001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.657028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.657169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.657197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.657309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.657354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.657469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.657534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.657681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.657709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.657870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.657930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.658075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.658106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.658204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.658233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.658338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.658398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.658483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.658514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.658603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.658630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.658714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.658741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.658821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.658851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.658951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.658993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.659160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.659195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.659294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.659324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.659415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.659445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.659543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.659579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.659674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.659701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.659801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.659829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.659951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.659999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.660089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.660116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.660230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.660279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.660371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.660399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.660483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.660509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.660595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.070 [2024-07-15 21:47:27.660623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.070 qpair failed and we were unable to recover it.
00:25:37.070 [2024-07-15 21:47:27.660733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.660781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.660861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.660888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.660970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.660997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.661093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.661127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.661238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.661271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.661364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.661391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.661518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.661589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.661741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.661790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.661920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.661974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.662123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.662163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.662362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.662453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.662607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.662642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.662746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.662828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.662924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.662952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.663068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.663096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.663221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.663269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.663382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.663427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.663506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.663532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.663615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.663642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.663719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.663745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.663832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.663860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.663948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.663976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.664063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.664090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.664174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.664206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.664291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.664318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.664407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.664434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.664511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.664538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.664623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.664651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.664733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.664760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.664870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.664916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.071 [2024-07-15 21:47:27.665002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.071 [2024-07-15 21:47:27.665030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.071 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.665147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.665197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.665283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.665310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.665418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.665466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.665566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.665627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.665720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.665748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.665832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.665859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.665951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.665981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.666071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.666098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.666194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.666223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.666310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.666336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.666414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.666440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.666518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.666544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.666633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.666679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.666813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.666849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.666970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.666998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.667114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.667158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.667259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.667293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.667401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.667445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.667560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.667615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.667786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.667861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.667991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.668038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.668164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.668195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.668281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.668309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.668422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.668468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.668579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.668621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.668722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.668768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.668855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.668882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.668961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.668988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.669080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.669106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.669229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.072 [2024-07-15 21:47:27.669296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.072 qpair failed and we were unable to recover it.
00:25:37.072 [2024-07-15 21:47:27.669429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.072 [2024-07-15 21:47:27.669511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.072 qpair failed and we were unable to recover it. 00:25:37.072 [2024-07-15 21:47:27.669643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.072 [2024-07-15 21:47:27.669707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.072 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.669836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.669896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.670042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.670089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.670209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.670245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 
00:25:37.073 [2024-07-15 21:47:27.670373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.670456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.670613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.670701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.670859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.670889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.671019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.671087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.671229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.671296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 
00:25:37.073 [2024-07-15 21:47:27.671432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.671493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.671631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.671696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.671826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.671893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.672015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.672080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.672252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.672288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 
00:25:37.073 [2024-07-15 21:47:27.672448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.672507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.672647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.672711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.672852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.672887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.672993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.673046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.673185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.673251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 
00:25:37.073 [2024-07-15 21:47:27.673377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.673444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.673564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.673634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.673766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.673809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.673935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.673965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.674064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.674111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 
00:25:37.073 [2024-07-15 21:47:27.674228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.674293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.674396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.674424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.674530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.674557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.674672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.674700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.674825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.674891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 
00:25:37.073 [2024-07-15 21:47:27.675022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.675083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.675238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.675266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.675380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.675441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.675574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.073 [2024-07-15 21:47:27.675635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.073 qpair failed and we were unable to recover it. 00:25:37.073 [2024-07-15 21:47:27.675766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.675810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 
00:25:37.074 [2024-07-15 21:47:27.675947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.675974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.676099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.676165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.676299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.676363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.676494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.676562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.676699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.676743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 
00:25:37.074 [2024-07-15 21:47:27.676849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.676901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.677045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.677076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.677164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.677191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.677303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.677349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.677441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.677476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 
00:25:37.074 [2024-07-15 21:47:27.677595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.677641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.677731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.677759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.677845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.677872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.677964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.677992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.678075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.678102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 
00:25:37.074 [2024-07-15 21:47:27.678191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.678217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.678305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.678331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.678407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.678434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.678520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.678547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.678623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.678650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 
00:25:37.074 [2024-07-15 21:47:27.678744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.678771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.678883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.678956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.679077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.679151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.679275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.074 [2024-07-15 21:47:27.679321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.074 qpair failed and we were unable to recover it. 00:25:37.074 [2024-07-15 21:47:27.679439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.679506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 
00:25:37.075 [2024-07-15 21:47:27.679638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.679682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.679800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.679835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.679947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.680020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.680172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.680210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.680342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.680375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 
00:25:37.075 [2024-07-15 21:47:27.680493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.680520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.680650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.680685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.680782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.680816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.680913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.680947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.681120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.681205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 
00:25:37.075 [2024-07-15 21:47:27.681320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.681354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.681444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.681472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.681576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.681641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.681767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.681822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 00:25:37.075 [2024-07-15 21:47:27.681969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.075 [2024-07-15 21:47:27.682051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.075 qpair failed and we were unable to recover it. 
00:25:37.075 [2024-07-15 21:47:27.682163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.075 [2024-07-15 21:47:27.682206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.075 qpair failed and we were unable to recover it.
[... the connect() failed, errno = 111 (ECONNREFUSED) / sock connection error record above repeats continuously from 21:47:27.682 through 21:47:27.699, for tqpair values 0x7fc080000b90, 0x7fc088000b90, 0x7fc090000b90, and 0x2196190, all targeting addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:37.079 [2024-07-15 21:47:27.699545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.699572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.699685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.699744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.699870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.699899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.699999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.700032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.700152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.700182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-15 21:47:27.700299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.700334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.700454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.700499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.700591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.700624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.700734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.700767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.700864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.700891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-15 21:47:27.700978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.701083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.701210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.701333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.701452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-15 21:47:27.701559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.701687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.701808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.701928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.701955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.702032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.702059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-15 21:47:27.702149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.702177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.702259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.702286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.702378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.702406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.702494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.702521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.702602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.702629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 
00:25:37.079 [2024-07-15 21:47:27.702715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.079 [2024-07-15 21:47:27.702743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.079 qpair failed and we were unable to recover it. 00:25:37.079 [2024-07-15 21:47:27.702828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.702859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.702942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.702969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.703050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.703156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 
00:25:37.080 [2024-07-15 21:47:27.703269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.703379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.703491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.703605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.703710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 
00:25:37.080 [2024-07-15 21:47:27.703825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.703936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.703963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.704040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.704066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.704151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.704180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.704270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.704297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 
00:25:37.080 [2024-07-15 21:47:27.704385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.704413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.704493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.704520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.704598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.704625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.704719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.704748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.704852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.704880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 
00:25:37.080 [2024-07-15 21:47:27.704973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.705000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.705085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.705112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.705207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.705237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.705317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.080 [2024-07-15 21:47:27.705343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.080 qpair failed and we were unable to recover it. 00:25:37.080 [2024-07-15 21:47:27.705427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.705454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-15 21:47:27.705532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.705570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.705674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.705709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.705813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.705848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.705982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.706017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.706146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.706191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-15 21:47:27.706290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.706324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.706438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.706465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.706581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.706625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.706704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.706731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.706812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.706840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-15 21:47:27.706935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.706967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.707060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.707089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.707175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.707202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.707291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.707319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.707408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.707435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-15 21:47:27.707513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.707539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.707622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.707655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.707754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.707789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.707889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.707929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.708034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.708069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-15 21:47:27.708159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.708217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.708341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.708384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.708496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.708541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.708643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.708701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.708847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.708878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.081 [2024-07-15 21:47:27.709018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.709065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.709196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.709225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.709312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.709339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.709446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.709492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 00:25:37.081 [2024-07-15 21:47:27.709600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.081 [2024-07-15 21:47:27.709628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.081 qpair failed and we were unable to recover it. 
00:25:37.085 [2024-07-15 21:47:27.725544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.725598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.725728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.725788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.725922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.725982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.726116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.726165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.726286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.726317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 
00:25:37.085 [2024-07-15 21:47:27.726405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.726433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.726525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.726552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.726633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.726660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.726742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.726770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.726858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.726885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 
00:25:37.085 [2024-07-15 21:47:27.726963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.726994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.727080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.727110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.727206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.727234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.727334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.727362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 00:25:37.085 [2024-07-15 21:47:27.727449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.085 [2024-07-15 21:47:27.727479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.085 qpair failed and we were unable to recover it. 
00:25:37.085 [2024-07-15 21:47:27.727577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.727609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.727727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.727768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.727848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.727876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.727979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.728014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.728118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.728169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-15 21:47:27.728252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.728279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.728375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.728403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.728501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.728531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.728631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.728678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.728783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.728825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-15 21:47:27.728917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.728944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.729028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.729073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.729167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.729212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.729317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.729348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.729447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.729476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-15 21:47:27.729594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.729626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.729747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.729791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.729895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.729939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.730027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.730054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.730136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.730169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-15 21:47:27.730264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.730292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.730382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.730408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.730498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.730531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.730624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.730651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.730743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.730771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 
00:25:37.086 [2024-07-15 21:47:27.731852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.731898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.731983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.732011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.732090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.732116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.732208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.086 [2024-07-15 21:47:27.732235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.086 qpair failed and we were unable to recover it. 00:25:37.086 [2024-07-15 21:47:27.732322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.732347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-15 21:47:27.732430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.732457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.732544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.732571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.732661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.732688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.732761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.732787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.732867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.732894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-15 21:47:27.732978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.733096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.733218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.733329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.733445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-15 21:47:27.733566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.733695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.733832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.733962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.733989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.734086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.734127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-15 21:47:27.734240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.734272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.734422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.734488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.734602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.734671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.734810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.734836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.734927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.734956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-15 21:47:27.735034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.735151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.735273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.735393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.735499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-15 21:47:27.735609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.735742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.735852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.735961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.735988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 00:25:37.087 [2024-07-15 21:47:27.736077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.087 [2024-07-15 21:47:27.736105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.087 qpair failed and we were unable to recover it. 
00:25:37.087 [2024-07-15 21:47:27.736197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.736236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.736337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.736366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.736454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.736487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.736576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.736605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.736689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.736717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-15 21:47:27.736820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.736867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.736981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.737012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.737107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.737136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.737250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.737277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.737353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.737380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-15 21:47:27.737459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.737486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.737559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.737586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.737677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.737702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.737810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.737835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.738629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.738675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-15 21:47:27.738778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.738804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.738903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.738930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.739018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.739124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.739246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-15 21:47:27.739373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.739493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.739600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.739717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.739826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 
00:25:37.088 [2024-07-15 21:47:27.739956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.739986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.740087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.740117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.740238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.740271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.740368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.088 [2024-07-15 21:47:27.740395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.088 qpair failed and we were unable to recover it. 00:25:37.088 [2024-07-15 21:47:27.740485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.740517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-15 21:47:27.740605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.740633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.740718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.740744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.740825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.740852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.740985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.741024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.741126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.741161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-15 21:47:27.741249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.741287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.741388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.741417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.741549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.741591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.741704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.741731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.741820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.741848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-15 21:47:27.741940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.741968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.742059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.742085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.742163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.742189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.742269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.742295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.742408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.742452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-15 21:47:27.742556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.742600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.742679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.742705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.742789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.742815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.742915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.742944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.743027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-15 21:47:27.743131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.743246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.743370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.743485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.743586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-15 21:47:27.743693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.743804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.743914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.743940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.744020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.744045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.744129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.744166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 
00:25:37.089 [2024-07-15 21:47:27.744248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.089 [2024-07-15 21:47:27.744274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.089 qpair failed and we were unable to recover it. 00:25:37.089 [2024-07-15 21:47:27.744357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.744384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.744467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.744493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.744585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.744617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.744710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.744736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-15 21:47:27.744817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.744843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.744923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.744950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.745032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.745148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.745255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-15 21:47:27.745369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.745478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.745587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.745697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.745797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-15 21:47:27.745923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.745966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.746045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.746154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.746266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.746371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-15 21:47:27.746472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.746580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.746683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.746800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.746919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.746949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-15 21:47:27.747026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.747144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.747254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.747376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.747482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 
00:25:37.090 [2024-07-15 21:47:27.747594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.747711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.747835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.747943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.747969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.090 qpair failed and we were unable to recover it. 00:25:37.090 [2024-07-15 21:47:27.748071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.090 [2024-07-15 21:47:27.748097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 
00:25:37.091 [2024-07-15 21:47:27.748199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.748239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-15 21:47:27.748318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.748348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-15 21:47:27.748426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.748451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-15 21:47:27.748535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.748560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-15 21:47:27.748637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.748662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 
00:25:37.091 [2024-07-15 21:47:27.748740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.748766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-15 21:47:27.748858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.748884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-15 21:47:27.748971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.748997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-15 21:47:27.749084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.749112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 00:25:37.091 [2024-07-15 21:47:27.749206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.091 [2024-07-15 21:47:27.749235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.091 qpair failed and we were unable to recover it. 
00:25:37.091 [2024-07-15 21:47:27.749332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.749365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.749458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.749485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.749574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.749602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.749688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.749716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.749796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.749823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.749906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.749933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.750011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.750037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.750131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.750164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.750260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.750290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.750382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.750408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.750516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.750558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.750660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.750691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.750785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.750811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.750919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.750960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.751056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.751086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.751212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.751252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.751351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.091 [2024-07-15 21:47:27.751380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.091 qpair failed and we were unable to recover it.
00:25:37.091 [2024-07-15 21:47:27.751498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.751540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.751643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.751694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.751797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.751841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.751959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.751989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.752100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.752131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.752248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.752289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.752404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.752437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.752535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.752559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.752642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.752680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.752777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.752816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.752906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.752932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.753017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.753056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.753151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.753177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.753264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.753290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.753385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.753412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.753514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.753541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.753643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.753686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.753803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.753833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.753928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.753955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.754050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.754077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.754171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.754197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.754293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.754323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.754426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.754456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.754562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.754592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.754707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.754749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.754854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.754897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.754994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.755024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.755135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.755184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.755266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.755298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.755398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.755429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.755540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.755569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.755674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.755704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.755818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.755850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.755945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.755972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.092 [2024-07-15 21:47:27.756051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.092 [2024-07-15 21:47:27.756080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.092 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.756176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.756205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.756313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.756357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.757282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.757315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.757426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.757487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.757588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.757620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.757718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.757744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.757823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.757849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.757941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.757968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.758928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.758954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.759032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.759058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.759129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.759175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.759267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.759295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.759395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.759421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.759508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.759533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.759620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.759661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.759761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.759801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.759897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.759928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.760926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.760956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.761039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.761068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.093 [2024-07-15 21:47:27.761162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.093 [2024-07-15 21:47:27.761191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.093 qpair failed and we were unable to recover it.
00:25:37.094 [2024-07-15 21:47:27.761285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.094 [2024-07-15 21:47:27.761313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.094 qpair failed and we were unable to recover it.
00:25:37.094 [2024-07-15 21:47:27.761393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.094 [2024-07-15 21:47:27.761419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.094 qpair failed and we were unable to recover it.
00:25:37.094 [2024-07-15 21:47:27.761499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.094 [2024-07-15 21:47:27.761525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.094 qpair failed and we were unable to recover it.
00:25:37.094 [2024-07-15 21:47:27.761610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.094 [2024-07-15 21:47:27.761635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.094 qpair failed and we were unable to recover it.
00:25:37.094 [2024-07-15 21:47:27.761710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.094 [2024-07-15 21:47:27.761735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.094 qpair failed and we were unable to recover it.
00:25:37.094 [2024-07-15 21:47:27.761814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.094 [2024-07-15 21:47:27.761840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.094 qpair failed and we were unable to recover it.
00:25:37.094 [2024-07-15 21:47:27.761924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.094 [2024-07-15 21:47:27.761950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.094 qpair failed and we were unable to recover it.
00:25:37.094 [2024-07-15 21:47:27.762026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.762129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.762244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.762355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.762469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-15 21:47:27.762574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.762700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.762806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.762907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.762932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.763009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-15 21:47:27.763107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.763228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.763342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.763446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.763547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-15 21:47:27.763656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.763759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.763865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.763969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.763995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.764070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.764097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-15 21:47:27.764189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.764217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.764309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.764336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.764426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.764452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.764532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.764557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.764646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.764674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 
00:25:37.094 [2024-07-15 21:47:27.764763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.764792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.094 [2024-07-15 21:47:27.764875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.094 [2024-07-15 21:47:27.764901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.094 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.764991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.765109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.765232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-15 21:47:27.765345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.765456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.765574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.765687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.765793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-15 21:47:27.765913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.765943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.766053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.766181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.766297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.766434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-15 21:47:27.766552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.766652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.766761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.766866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.766972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.766998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-15 21:47:27.767077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.767203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.767304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.767407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.767517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-15 21:47:27.767634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.767747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.767851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.767954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.767980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.768069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-15 21:47:27.768186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.768301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.768411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.768524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.768636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 
00:25:37.095 [2024-07-15 21:47:27.768744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.768852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.768958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.768983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.095 qpair failed and we were unable to recover it. 00:25:37.095 [2024-07-15 21:47:27.769061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.095 [2024-07-15 21:47:27.769087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.769172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.769199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-15 21:47:27.769275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.769301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.769377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.769403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.769485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.769511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.769596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.769622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.769698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.769723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-15 21:47:27.769803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.769828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.769909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.769936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.770017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.770122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.770238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-15 21:47:27.770358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.770464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.770574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.770675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.770788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-15 21:47:27.770907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.770936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.771021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.771048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.771136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.771169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.771251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.771277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 00:25:37.096 [2024-07-15 21:47:27.771359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.096 [2024-07-15 21:47:27.771392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.096 qpair failed and we were unable to recover it. 
00:25:37.096 [2024-07-15 21:47:27.771481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.096 [2024-07-15 21:47:27.771510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.096 qpair failed and we were unable to recover it.
00:25:37.096 [2024-07-15 21:47:27.771594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.096 [2024-07-15 21:47:27.771619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.096 qpair failed and we were unable to recover it.
00:25:37.096 [2024-07-15 21:47:27.771697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.096 [2024-07-15 21:47:27.771722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.096 qpair failed and we were unable to recover it.
00:25:37.096 [2024-07-15 21:47:27.771801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.096 [2024-07-15 21:47:27.771826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.096 qpair failed and we were unable to recover it.
00:25:37.096 [2024-07-15 21:47:27.771910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.096 [2024-07-15 21:47:27.771936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.096 qpair failed and we were unable to recover it.
00:25:37.096 [2024-07-15 21:47:27.772024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.096 [2024-07-15 21:47:27.772050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.096 qpair failed and we were unable to recover it.
00:25:37.096 [2024-07-15 21:47:27.772128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.096 [2024-07-15 21:47:27.772162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.096 qpair failed and we were unable to recover it.
00:25:37.096 [2024-07-15 21:47:27.772254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.772282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.772370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.772397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.772480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.772504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.772590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.772617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.772708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.772735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.772826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.772853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.772946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.772973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.773956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.773982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.774063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.774090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.774179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.774207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.774290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.774316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.774408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.774436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.774518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.774543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.097 qpair failed and we were unable to recover it.
00:25:37.097 [2024-07-15 21:47:27.774626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.097 [2024-07-15 21:47:27.774652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.774734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.774761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.774854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.774881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.774972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.775086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.775196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.775313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.775427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.775538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.775645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.775759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.775883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.775912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.776914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.776991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.777101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.777221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.777327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.777447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.777557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.777664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.777779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.777895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.777924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.778022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.778051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.778136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.778170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.778249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.778275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.778361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.778388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.098 [2024-07-15 21:47:27.778474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.098 [2024-07-15 21:47:27.778500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.098 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.778587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.778614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.778702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.778728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.778810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.778837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.778921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.778954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.779035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.779063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.779149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.779177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.779277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.779306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.779416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.779444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.779534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.779560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.779646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.779671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.779773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.779815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.779903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.779931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.780064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.780090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.780217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.780245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.780328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.780355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.780447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.780476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.780555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.780581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.780687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.780745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.780825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.780852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.780944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.780973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.781930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.781960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.782937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.782965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.783048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.783076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.783160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.783189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.099 qpair failed and we were unable to recover it.
00:25:37.099 [2024-07-15 21:47:27.783278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.099 [2024-07-15 21:47:27.783306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.783384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.783410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.783487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.783513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.783596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.783621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.783703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.783728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.783807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.783832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.783907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.783933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.784011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.784036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.784111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.784144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.784221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.784247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.784338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.784367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.784451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.100 [2024-07-15 21:47:27.784478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.100 qpair failed and we were unable to recover it.
00:25:37.100 [2024-07-15 21:47:27.784558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.784584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.784662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.784688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.784770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.784796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.784886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.784914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.785005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 
00:25:37.100 [2024-07-15 21:47:27.785114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.785254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.785381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.785486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.785589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 
00:25:37.100 [2024-07-15 21:47:27.785700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.785808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.785914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.785942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.786031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.786149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 
00:25:37.100 [2024-07-15 21:47:27.786261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.786362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.786484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.786621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.786754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 
00:25:37.100 [2024-07-15 21:47:27.786856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.786965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.786990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.787073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.787099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.787193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.787222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.787299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.787325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 
00:25:37.100 [2024-07-15 21:47:27.787416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.787444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.787549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.787578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.787694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.787736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.787813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.100 [2024-07-15 21:47:27.787838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.100 qpair failed and we were unable to recover it. 00:25:37.100 [2024-07-15 21:47:27.787935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.787965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 
00:25:37.101 [2024-07-15 21:47:27.788059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.788086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.788177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.788208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.788328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.788387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.788491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.788534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.788613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.788639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 
00:25:37.101 [2024-07-15 21:47:27.788737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.788766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.788880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.788909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.789004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.789030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.789129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.789163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.789241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.789267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 
00:25:37.101 [2024-07-15 21:47:27.789362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.789392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.789486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.789512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.789596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.789621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.789704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.789733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.789822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.789851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 
00:25:37.101 [2024-07-15 21:47:27.789956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.790005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.790090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.790117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.790255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.790322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.790434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.790463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.790564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.790592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 
00:25:37.101 [2024-07-15 21:47:27.790672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.790699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.790819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.790873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.790955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.790982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.791067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.791093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.791180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.791207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 
00:25:37.101 [2024-07-15 21:47:27.791287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.791313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.791391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.791416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.791502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.791530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.791617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.791643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.791724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.791752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 
00:25:37.101 [2024-07-15 21:47:27.791832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.791860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.101 [2024-07-15 21:47:27.791991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.101 [2024-07-15 21:47:27.792029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.101 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.792113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.792144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.792221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.792246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.792330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.792357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-15 21:47:27.792437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.792463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.792549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.792576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.792656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.792682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.792758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.792784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.792861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.792886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-15 21:47:27.792974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.793000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.793081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.793108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.793206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.793235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.793364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.793391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.793468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.793493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-15 21:47:27.793572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.793597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.793686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.793712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.793941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.793968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.794049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.794077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.794164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.794193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-15 21:47:27.794286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.794316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.794433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.794462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.794604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.794644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.794720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.794746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.794828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.794856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-15 21:47:27.794948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.794983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.795156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.795217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.795307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.795337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.795446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.795475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.795580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.795609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-15 21:47:27.795702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.795728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.795803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.795828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.795914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.795941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.796018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.796044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.796131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.796164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-15 21:47:27.796246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.796272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.796373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.796402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.796630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.796669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.796745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.796771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.796856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.796882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 
00:25:37.102 [2024-07-15 21:47:27.796977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.797006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.797099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.102 [2024-07-15 21:47:27.797125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.102 qpair failed and we were unable to recover it. 00:25:37.102 [2024-07-15 21:47:27.797215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.797242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.797321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.797347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.797433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.797459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-15 21:47:27.797545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.797574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.797662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.797689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.797775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.797803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.797883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.797909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.798008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.798040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-15 21:47:27.798129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.798162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.798247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.798275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.798379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.798424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.798522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.798555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.798649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.798676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-15 21:47:27.798756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.798782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.798880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.798911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.799021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.799164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.799278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-15 21:47:27.799393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.799507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.799612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.799714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.799824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-15 21:47:27.799927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.799952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.800034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.800145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.800250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.800357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-15 21:47:27.800460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.800571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.800676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.800784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.800894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.800921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-15 21:47:27.801001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.801028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.801112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.801145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.801234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.801261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.801347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.801375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.801461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.801487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 
00:25:37.103 [2024-07-15 21:47:27.801563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.801589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.801671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.801698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.801780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.103 [2024-07-15 21:47:27.801806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.103 qpair failed and we were unable to recover it. 00:25:37.103 [2024-07-15 21:47:27.801894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.801921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.802005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.802034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-15 21:47:27.802163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.802191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.802275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.802303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.802386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.802411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.802484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.802509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.802611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.802652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-15 21:47:27.802744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.802773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.802865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.802891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.802973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.803083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.803198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-15 21:47:27.803356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.803457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.803559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.803664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.803766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-15 21:47:27.803873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.803900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.803976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.804091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.804208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.804317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-15 21:47:27.804423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.804552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.804678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.804790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.804911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.804938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-15 21:47:27.805016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.805170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.805276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.805392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.805502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.104 [2024-07-15 21:47:27.805614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.805724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.805827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.805938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.805964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 00:25:37.104 [2024-07-15 21:47:27.806050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.104 [2024-07-15 21:47:27.806078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.104 qpair failed and we were unable to recover it. 
00:25:37.105 [2024-07-15 21:47:27.806162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.806191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.806281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.806308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.806396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.806423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.806514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.806540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.806624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.806650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.806733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.806759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.806840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.806866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.806946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.806971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.807907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.807935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.808016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.808042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.105 qpair failed and we were unable to recover it.
00:25:37.105 [2024-07-15 21:47:27.808131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.105 [2024-07-15 21:47:27.808162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.808243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.808269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.808362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.808393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.808492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.808523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.808623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.808651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.808804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.808830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.808919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.808947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.809931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.809956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.810957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.810983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.811068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.811093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.811201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.811229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.811345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.811372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.811462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.811488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.811580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.811607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.811714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.811756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.811856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.811887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.811984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.812010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.812091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.812118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.397 [2024-07-15 21:47:27.812222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.397 [2024-07-15 21:47:27.812250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.397 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.812375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.812435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.812529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.812556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.812643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.812668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.812752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.812778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.812859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.812887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.812970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.812996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.813075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.813100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.813189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.813215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.813414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.813456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.813542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.813568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.813668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.813695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.813803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.813831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.813937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.813965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.814088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.814202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.814319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.814445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.814558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.814684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.814800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.814902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.814997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.815115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.815239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.815353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.815458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.815572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.815683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.815791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.815900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.815927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.816914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.816941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.817036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.398 [2024-07-15 21:47:27.817062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.398 qpair failed and we were unable to recover it.
00:25:37.398 [2024-07-15 21:47:27.817142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.399 [2024-07-15 21:47:27.817172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.399 qpair failed and we were unable to recover it.
00:25:37.399 [2024-07-15 21:47:27.817253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.399 [2024-07-15 21:47:27.817279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.399 qpair failed and we were unable to recover it.
00:25:37.399 [2024-07-15 21:47:27.817361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.817387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.817468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.817494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.817577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.817605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.817682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.817708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.817784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.817809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-15 21:47:27.817906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.817933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.818032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.818059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.818155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.818183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.818264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.818291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.818379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.818405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-15 21:47:27.818496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.818524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.818613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.818640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.818741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.818769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.818861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.818887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.818978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.819005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-15 21:47:27.819112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.819148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.819260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.819288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.819398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.819428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.819520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.819546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.819648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.819675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-15 21:47:27.819767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.819793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.819891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.819919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.820016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.820042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.820136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.820170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.820283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.820312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-15 21:47:27.820409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.820437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.820542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.820570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.820666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.820693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.820781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.820808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.820886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.820911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-15 21:47:27.820988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.821103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.821217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.821326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.821439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 
00:25:37.399 [2024-07-15 21:47:27.821549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.821657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.821762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.399 [2024-07-15 21:47:27.821864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.399 [2024-07-15 21:47:27.821893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.399 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.821995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.822023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-15 21:47:27.822122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.822155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.822241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.822266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.822341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.822367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.822451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.822479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.822614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.822655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-15 21:47:27.822731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.822757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.822860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.822902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.822998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.823119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.823252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-15 21:47:27.823395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.823523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.823642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.823746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.823852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-15 21:47:27.823963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.823990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.824077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.824104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.824196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.824223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.824318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.824345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.824453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.824480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-15 21:47:27.824593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.824621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.824715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.824740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.824820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.824846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.824924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.824949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.825033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-15 21:47:27.825149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.825268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.825371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.825478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.825585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-15 21:47:27.825695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.825800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.825924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.825959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.826047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.826076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.826162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.826206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 
00:25:37.400 [2024-07-15 21:47:27.826312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.826338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.826438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.826508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.826634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.826693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.826800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.826860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.400 qpair failed and we were unable to recover it. 00:25:37.400 [2024-07-15 21:47:27.826995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.400 [2024-07-15 21:47:27.827062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 
00:25:37.401 [2024-07-15 21:47:27.827154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-15 21:47:27.827181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-15 21:47:27.827262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-15 21:47:27.827290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-15 21:47:27.827372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-15 21:47:27.827398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-15 21:47:27.827476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-15 21:47:27.827502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 00:25:37.401 [2024-07-15 21:47:27.827601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.401 [2024-07-15 21:47:27.827628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.401 qpair failed and we were unable to recover it. 
00:25:37.401 [2024-07-15 21:47:27.827719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.827745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.827828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.827857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.827939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.827969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.828951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.828979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.829067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.829097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.829205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.829248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.829361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.829421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.829502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.829528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.829629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.829688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.829775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.829802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.829882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.829908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.829987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.830013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.830089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.830115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.830204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.830231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.401 qpair failed and we were unable to recover it.
00:25:37.401 [2024-07-15 21:47:27.830315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.401 [2024-07-15 21:47:27.830341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.830425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.830451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.830528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.830553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.830629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.830655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.830732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.830758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.830833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.830858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.830937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.830963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.831961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.831986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.832063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.832090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.832186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.832219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.832308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.832335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.832466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.832546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.832629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.832655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.832731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.832756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.832840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.832869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.832944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.832971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.833057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.833084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.833170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.833202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.833300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.833329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.833431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.833461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.833539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.833566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.833649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.833675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.833772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.833831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.833951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.833977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.834080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.834189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.834299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.834404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.834511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.834619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.834763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.402 [2024-07-15 21:47:27.834887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.402 qpair failed and we were unable to recover it.
00:25:37.402 [2024-07-15 21:47:27.834979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.835084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.835203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.835305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.835406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.835553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.835686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.835820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.835937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.835966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.836060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.836087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.836193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.836222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.836339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.836367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.836466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.836511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.836625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.836653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.836756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.836784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.836876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.836902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.836976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.837965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.837992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.838089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.838115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.838206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.838233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.838365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.838393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.838478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.838505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.838605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.838670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.838789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.838851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.838975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.839003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.839088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.839114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.839274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.839333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.839453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.839513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.403 [2024-07-15 21:47:27.839661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.403 [2024-07-15 21:47:27.839700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.403 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.839794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.839825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.839974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.840022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.840174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.840220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.840334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.840363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.840481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.840547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.840718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.840777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.840896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.840924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.841080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.841153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.841266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.841318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.841501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.841559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.841662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.841717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.841828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.841890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.842013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.842042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.842152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.842193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.842329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.842384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.842535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.404 [2024-07-15 21:47:27.842595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.404 qpair failed and we were unable to recover it.
00:25:37.404 [2024-07-15 21:47:27.842691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.842720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.842872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.842927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.843017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.843045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.843173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.843200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.843283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.843311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 
00:25:37.404 [2024-07-15 21:47:27.843418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.843459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.843553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.843581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.843690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.843718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.843837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.843867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.844014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.844056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 
00:25:37.404 [2024-07-15 21:47:27.844174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.844202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.844278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.844304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.844382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.844408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.844495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.844521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.844609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.844635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 
00:25:37.404 [2024-07-15 21:47:27.844759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.844785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.844901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.844927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.845023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.845054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.845201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.845232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.845332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.845360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 
00:25:37.404 [2024-07-15 21:47:27.845475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.845514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.845591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.845617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.845690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.404 [2024-07-15 21:47:27.845716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.404 qpair failed and we were unable to recover it. 00:25:37.404 [2024-07-15 21:47:27.845871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.845923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.846042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.846094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-15 21:47:27.846222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.846281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.846409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.846441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.846564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.846624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.846766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.846806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.846882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.846908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-15 21:47:27.847010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.847040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.847135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.847168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.847267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.847295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.847401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.847429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.847532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.847559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-15 21:47:27.847682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.847722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.847823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.847851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.847943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.847969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.848048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.848074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.848161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.848187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-15 21:47:27.848283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.848311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.848442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.848468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.848564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.848589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.848692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.848719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.848816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.848842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-15 21:47:27.848922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.848948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.849029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.849055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.849157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.849184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.849281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.849309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.849404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.849430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-15 21:47:27.849553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.849603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.849715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.849745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.849858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.849888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.849987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.850018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.850097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.850124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 
00:25:37.405 [2024-07-15 21:47:27.850211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.850238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.850340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.850367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.850456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.405 [2024-07-15 21:47:27.850484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.405 qpair failed and we were unable to recover it. 00:25:37.405 [2024-07-15 21:47:27.850584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.850610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.850704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.850733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-15 21:47:27.850819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.850845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.850922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.850947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.851032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.851060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.851161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.851230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.851341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.851389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-15 21:47:27.851515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.851569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.851694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.851722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.851862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.851912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.851992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.852018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.852186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.852212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-15 21:47:27.852330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.852385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.852514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.852555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.852658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.852686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.852832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.852873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.852970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.852998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-15 21:47:27.853089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.853115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.853217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.853245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.853356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.853396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.853495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.853522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.853637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.853666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-15 21:47:27.853785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.853812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.853933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.853997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.854182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.854210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.854310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.854372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 00:25:37.406 [2024-07-15 21:47:27.854490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.406 [2024-07-15 21:47:27.854518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.406 qpair failed and we were unable to recover it. 
00:25:37.406 [2024-07-15 21:47:27.854613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.406 [2024-07-15 21:47:27.854655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.406 qpair failed and we were unable to recover it.
00:25:37.406 [2024-07-15 21:47:27.854772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.406 [2024-07-15 21:47:27.854803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.406 qpair failed and we were unable to recover it.
00:25:37.406 [2024-07-15 21:47:27.854899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.406 [2024-07-15 21:47:27.854927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.406 qpair failed and we were unable to recover it.
00:25:37.406 [2024-07-15 21:47:27.855014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.406 [2024-07-15 21:47:27.855040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.406 qpair failed and we were unable to recover it.
00:25:37.406 [2024-07-15 21:47:27.855118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.406 [2024-07-15 21:47:27.855149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.406 qpair failed and we were unable to recover it.
00:25:37.406 [2024-07-15 21:47:27.855252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.406 [2024-07-15 21:47:27.855278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.406 qpair failed and we were unable to recover it.
00:25:37.406 [2024-07-15 21:47:27.855366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.406 [2024-07-15 21:47:27.855392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.855504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.855530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.855633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.855667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.855751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.855778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.855876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.855904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.855989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.856920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.856997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.857914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.857995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.858023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.858111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.858137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.858225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.858251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.858355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.858383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.858491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.858517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.858613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.858678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.858800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.858832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.858965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.859004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.859088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.859114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.859226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.859268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.859351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.859378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.859456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.859482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.859582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.859613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.407 [2024-07-15 21:47:27.859692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.407 [2024-07-15 21:47:27.859718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.407 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.859806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.859834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.859912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.859939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.860936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.860962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.861954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.861982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.862060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.862086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.862183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.862211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.862291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.862317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.862402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.862428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.862508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.862533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.862619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.862646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.862742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.862770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.862875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.862902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.863040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.863171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.863276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.863386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.863507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.863620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.863730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.408 [2024-07-15 21:47:27.863832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.408 qpair failed and we were unable to recover it.
00:25:37.408 [2024-07-15 21:47:27.863905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.863931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.864963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.864988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.865090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.865115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.865205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.865231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.865330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.865361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.865454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.865481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.865644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.865672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.865755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.409 [2024-07-15 21:47:27.865781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.409 qpair failed and we were unable to recover it.
00:25:37.409 [2024-07-15 21:47:27.865862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.865888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.865967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.865993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.866079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.866107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.866205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.866234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.866331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.866358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 
00:25:37.409 [2024-07-15 21:47:27.866438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.866464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.866544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.866571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.866645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.866675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.866765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.866792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.866880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.866907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 
00:25:37.409 [2024-07-15 21:47:27.867010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.867051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.867129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.867164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.867260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.867286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.867376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.867403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.867503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.867551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 
00:25:37.409 [2024-07-15 21:47:27.867667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.867724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.867833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.867888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.868028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.868069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.868177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.868210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 00:25:37.409 [2024-07-15 21:47:27.868310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.868341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.409 qpair failed and we were unable to recover it. 
00:25:37.409 [2024-07-15 21:47:27.868457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.409 [2024-07-15 21:47:27.868485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.868599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.868625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.868713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.868741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.868864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.868913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.869039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.869104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-15 21:47:27.869237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.869298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.869416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.869455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.869597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.869627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.869727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.869755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.869882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.869924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-15 21:47:27.870017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.870043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.870126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.870158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.870256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.870284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.870376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.870402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.870518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.870570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-15 21:47:27.870657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.870682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.870784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.870812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.870926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.870954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.871053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.871186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-15 21:47:27.871292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.871402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.871511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.871624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.871738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-15 21:47:27.871850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.871951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.871977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.872064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.872097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.872193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.872221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.872312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.872340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 
00:25:37.410 [2024-07-15 21:47:27.872430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.872455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.872540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.872567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.410 qpair failed and we were unable to recover it. 00:25:37.410 [2024-07-15 21:47:27.872649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.410 [2024-07-15 21:47:27.872675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.872757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.872782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.872865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.872891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-15 21:47:27.872976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.873009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.873094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.873121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.873214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.873245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.873347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.873412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.873521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.873580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-15 21:47:27.873702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.873746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.873873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.873923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.874046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.874088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.874190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.874220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.874315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.874373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-15 21:47:27.874547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.874593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.874704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.874762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.874876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.874946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.875055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.875083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.875188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.875246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-15 21:47:27.875365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.875438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.875556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.875609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.875794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.875855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.875962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.876016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.876168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.876212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-15 21:47:27.876325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.876357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.876459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.876488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.876580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.876606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.876693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.876719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.876810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.876847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-15 21:47:27.876963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.877000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.877101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.877144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.877243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.877273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.877360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.877389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.877487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.877518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 
00:25:37.411 [2024-07-15 21:47:27.877605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.877632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.411 [2024-07-15 21:47:27.877732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.411 [2024-07-15 21:47:27.877761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.411 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.877848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.877879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.877963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.877990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.878080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.878107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-15 21:47:27.878218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.878245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.878333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.878360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.878449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.878475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.878564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.878593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.878676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.878702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-15 21:47:27.878783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.878809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.878911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.878978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.879103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.879178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.879302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.879355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.879479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.879517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-15 21:47:27.879615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.879669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.879803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.879857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.879991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.880064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.880194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.880220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.880320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.880384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-15 21:47:27.880501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.880576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.880759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.880818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.880930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.881001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.881124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.881197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.881318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.881343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-15 21:47:27.881461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.881487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.881595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.881649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.881769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.881818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.881941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.881969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.882088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.882121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-15 21:47:27.882241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.882271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.882386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.882417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.882531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.882585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.882717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.882742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.882851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.882905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-15 21:47:27.883027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.883100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.883236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.883278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.883433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.883493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.883663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.883721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.883837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.883905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 
00:25:37.412 [2024-07-15 21:47:27.884025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.884083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.884213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.884253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.884355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.412 [2024-07-15 21:47:27.884419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.412 qpair failed and we were unable to recover it. 00:25:37.412 [2024-07-15 21:47:27.884534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.884593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.884708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.884764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-15 21:47:27.884879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.884930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.885054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.885125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.885262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.885288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.885414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.885471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.885602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.885655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-15 21:47:27.885765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.885833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.885974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.886016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.886161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.886223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.886355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.886424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.886528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.886587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-15 21:47:27.886704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.886743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.886848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.886887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.886999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.887038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.887146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.887205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.887328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.887380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-15 21:47:27.887566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.887626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.887769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.887826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.887940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.887969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.888075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.888116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.888219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.888262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-15 21:47:27.888407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.888433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.888545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.888584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.888689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.888722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.888827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.888881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.889003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.889049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-15 21:47:27.889132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.889165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.889263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.889292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.889388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.889415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.889538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.889600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.889716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.889773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 
00:25:37.413 [2024-07-15 21:47:27.889884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.889939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.890065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.890104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.890215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.890241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.890319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.413 [2024-07-15 21:47:27.890345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.413 qpair failed and we were unable to recover it. 00:25:37.413 [2024-07-15 21:47:27.890492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.890550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 
00:25:37.414 [2024-07-15 21:47:27.890669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.890739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.890851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.890907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.891033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.891073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.891221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.891306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.891420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.891474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 
00:25:37.414 [2024-07-15 21:47:27.891562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.891589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.891691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.891732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.891816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.891850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.891948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.891977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.892057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.892084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 
00:25:37.414 [2024-07-15 21:47:27.892176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.892209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.892298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.892325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.892402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.892429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.892532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.892559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 00:25:37.414 [2024-07-15 21:47:27.892651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.414 [2024-07-15 21:47:27.892678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.414 qpair failed and we were unable to recover it. 
00:25:37.414 [2024-07-15 21:47:27.892777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.414 [2024-07-15 21:47:27.892805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.414 qpair failed and we were unable to recover it.
00:25:37.417 [above connect() failed (errno = 111) / sock connection error / qpair failed message group repeated between 21:47:27.892899 and 21:47:27.911085 for tqpair=0x7fc080000b90, 0x7fc088000b90, 0x7fc090000b90, and 0x2196190, all with addr=10.0.0.2, port=4420]
00:25:37.417 [2024-07-15 21:47:27.911229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.911283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.911403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.911443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.911536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.911586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.911707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.911737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.911835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.911898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 
00:25:37.417 [2024-07-15 21:47:27.912035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.912090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.912224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.912267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.912387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.912443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.912558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.912619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.912759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.912846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 
00:25:37.417 [2024-07-15 21:47:27.912978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.913028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.913163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.913208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.913323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.913376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.913467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.913495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.913571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.913597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 
00:25:37.417 [2024-07-15 21:47:27.913674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.913701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.913798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.913825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.913903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.913930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.914021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.914050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.914144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.914176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 
00:25:37.417 [2024-07-15 21:47:27.914260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.914287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.914370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.914396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.914490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.914518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.914600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.914627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.417 qpair failed and we were unable to recover it. 00:25:37.417 [2024-07-15 21:47:27.914724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.417 [2024-07-15 21:47:27.914782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.914904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.914948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.915088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.915191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.915350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.915434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.915580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.915664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.915769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.915830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.915934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.915979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.916061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.916087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.916177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.916205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.916289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.916315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.916403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.916430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.916516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.916542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.916625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.916655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.916754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.916813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.916894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.916922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.917012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.917114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.917246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.917361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.917465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.917593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.917709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.917821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.917938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.917964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.918052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.918081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.918182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.918217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.918324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.918395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.918532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.918587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.918706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.918770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.918887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.918953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.919098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.919167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.919283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.919321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.919423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.919484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.919607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.919671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.919788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.919846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.919975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.920006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.920159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.920228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.920349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.920406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.920523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.920562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.920658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.920719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 00:25:37.418 [2024-07-15 21:47:27.920834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.920889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.418 qpair failed and we were unable to recover it. 
00:25:37.418 [2024-07-15 21:47:27.921021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.418 [2024-07-15 21:47:27.921078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 00:25:37.419 [2024-07-15 21:47:27.921216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.921279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 00:25:37.419 [2024-07-15 21:47:27.921389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.921428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 00:25:37.419 [2024-07-15 21:47:27.921539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.921592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 00:25:37.419 [2024-07-15 21:47:27.921715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.921784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 
00:25:37.419 [2024-07-15 21:47:27.921915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.921941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 00:25:37.419 [2024-07-15 21:47:27.922099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.922130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 00:25:37.419 [2024-07-15 21:47:27.922294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.922351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 00:25:37.419 [2024-07-15 21:47:27.922493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.922540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 00:25:37.419 [2024-07-15 21:47:27.922653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.419 [2024-07-15 21:47:27.922684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.419 qpair failed and we were unable to recover it. 
00:25:37.419 [2024-07-15 21:47:27.922795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.419 [2024-07-15 21:47:27.922826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.419 qpair failed and we were unable to recover it.
[The identical three-line record -- posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously from 21:47:27.922932 through 21:47:27.938751, cycling over tqpair values 0x7fc090000b90, 0x7fc088000b90, 0x7fc080000b90, and 0x2196190, all targeting addr=10.0.0.2, port=4420.]
00:25:37.422 [2024-07-15 21:47:27.938829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.938855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.938936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.938963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.939042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.939167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.939311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 
00:25:37.422 [2024-07-15 21:47:27.939423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.939536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.939646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.939748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.939860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 
00:25:37.422 [2024-07-15 21:47:27.939968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.939994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.940082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.940108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.940202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.940230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.940314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.940340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.940428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.940454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 
00:25:37.422 [2024-07-15 21:47:27.940540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.940567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.940649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.940679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.940774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.940800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.940882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.940909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.940997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.941023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 
00:25:37.422 [2024-07-15 21:47:27.941110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.941145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.941252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.941296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.941423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.941476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.941602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.941657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.941777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.941806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 
00:25:37.422 [2024-07-15 21:47:27.941898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.941927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.942021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.942049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.942150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.942194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.942275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.942302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.942383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.942410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 
00:25:37.422 [2024-07-15 21:47:27.942502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.942529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.942612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.942638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.422 [2024-07-15 21:47:27.942721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.422 [2024-07-15 21:47:27.942748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.422 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.942833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.942859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.942944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.942970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 
00:25:37.423 [2024-07-15 21:47:27.943057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.943171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.943280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.943394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.943504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 
00:25:37.423 [2024-07-15 21:47:27.943614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.943718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.943828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.943943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.943970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.944057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.944084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 
00:25:37.423 [2024-07-15 21:47:27.944173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.944203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.944294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.944321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.944413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.944440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.944543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.944583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.944682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.944761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 
00:25:37.423 [2024-07-15 21:47:27.944897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.944927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.945024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.945080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.945162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.945190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.945298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.945356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.945477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.945507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 
00:25:37.423 [2024-07-15 21:47:27.945606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.945655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.945776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.945839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.945963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.945992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.946128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.946208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.946346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.946375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 
00:25:37.423 [2024-07-15 21:47:27.946481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.946534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.946654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.946709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.946790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.946816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.946899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.946929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.947022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 
00:25:37.423 [2024-07-15 21:47:27.947143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.947250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.947378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.947489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.947594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 
00:25:37.423 [2024-07-15 21:47:27.947722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.947833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.947938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.947964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.423 qpair failed and we were unable to recover it. 00:25:37.423 [2024-07-15 21:47:27.948049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.423 [2024-07-15 21:47:27.948076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.948163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.948190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 
00:25:37.424 [2024-07-15 21:47:27.948275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.948301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.948386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.948411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.948495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.948520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.948600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.948627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.948709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.948734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 
00:25:37.424 [2024-07-15 21:47:27.948822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.948864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.948967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.948996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.949093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.949119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.949216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.949243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.949323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.949350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 
00:25:37.424 [2024-07-15 21:47:27.949436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.949461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.949539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.949581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.949675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.949732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.949849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.949908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.950023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.950081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 
00:25:37.424 [2024-07-15 21:47:27.950206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.950256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.950390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.950437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.950560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.950619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.950743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.950769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.950874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.950931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 
00:25:37.424 [2024-07-15 21:47:27.951056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.951081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.951221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.951277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.951404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.951430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.951526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.951582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.951702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.951759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 
00:25:37.424 [2024-07-15 21:47:27.951875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.951924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.952039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.952078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.952214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.952261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.952376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.952456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.952598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.952627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 
00:25:37.424 [2024-07-15 21:47:27.952720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.952763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.952842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.952868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.952967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.952997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.424 [2024-07-15 21:47:27.953106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.424 [2024-07-15 21:47:27.953134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.424 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.953235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.953276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.425 [2024-07-15 21:47:27.953364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.953391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.953469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.953495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.953582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.953608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.953683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.953725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.953837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.953868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.425 [2024-07-15 21:47:27.953957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.954007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.954122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.954185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.954299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.954329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.954442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.954491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.954606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.954664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.425 [2024-07-15 21:47:27.954780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.954835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.954955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.955003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.955123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.955160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.955288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.955321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.955430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.955461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.425 [2024-07-15 21:47:27.955582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.955624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.955702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.955728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.955826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.955856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.955967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.955997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.956108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.956156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.425 [2024-07-15 21:47:27.956262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.956304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.956404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.956434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.956534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.956565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.956690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.956744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.956861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.956913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.425 [2024-07-15 21:47:27.957015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.957070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.957207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.957237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.957325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.957351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.957435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.957461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.957557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.957589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.425 [2024-07-15 21:47:27.957695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.957755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.957876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.957926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.958047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.958093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.958230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.958261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.958354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.958409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.425 [2024-07-15 21:47:27.958524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.958577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.958716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.958748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.958895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.958927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.959091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.959119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 00:25:37.425 [2024-07-15 21:47:27.959224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.425 [2024-07-15 21:47:27.959267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.425 qpair failed and we were unable to recover it. 
00:25:37.426 [2024-07-15 21:47:27.959369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.959399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.959495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.959520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.959596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.959622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.959728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.959769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.959870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.959900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 
00:25:37.426 [2024-07-15 21:47:27.960017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.960072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.960230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.960262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.960378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.960423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.960556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.960600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.960715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.960761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 
00:25:37.426 [2024-07-15 21:47:27.960887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.960938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.961068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.961095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.961211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.961257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.961367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.961410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.961491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.961519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 
00:25:37.426 [2024-07-15 21:47:27.961617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.961643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.961741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.961772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.961893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.961951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.962029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.962147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 
00:25:37.426 [2024-07-15 21:47:27.962259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.962365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.962480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.962594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.962696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 
00:25:37.426 [2024-07-15 21:47:27.962822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.962930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.962961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.963041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.963069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.963157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.963183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 00:25:37.426 [2024-07-15 21:47:27.963268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.426 [2024-07-15 21:47:27.963322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.426 qpair failed and we were unable to recover it. 
00:25:37.426 [2024-07-15 21:47:27.963448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.963500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.963619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.963677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.963791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.963843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.963953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.963999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.964126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.964207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.964315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.964363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.964485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.964535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.964674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.964717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.964845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.964872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.964974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.965017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.426 qpair failed and we were unable to recover it.
00:25:37.426 [2024-07-15 21:47:27.965125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.426 [2024-07-15 21:47:27.965176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.965280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.965310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.965401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.965427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.965530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.965573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.965674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.965704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.965819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.965861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.965937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.965963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.966052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.966078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.966169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.966195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.966294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.966324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.966438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.966468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.966564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.966591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.966677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.966704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.966793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.966822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.966902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.966929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.967012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.967038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.967121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.967152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.967240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.967268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.967353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.967380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.967484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.967523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.967640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.967672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.967768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.967795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.967880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.967907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.968010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.968052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.968136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.968173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.968263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.968289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.968387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.968432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.968521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.968548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.968637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.968663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.968774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.968800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.968918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.968944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.969047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.969103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.969228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.969269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.969371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.969423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.969553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.969597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.969729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.969784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.969908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.969970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.970102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.970191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.970326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.970374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.970508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.970551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.970666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.427 [2024-07-15 21:47:27.970717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.427 qpair failed and we were unable to recover it.
00:25:37.427 [2024-07-15 21:47:27.970829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.970881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.971018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.971064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.971194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.971220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.971348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.971393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.971511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.971566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.971697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.971743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.971864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.971919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.972037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.972083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.972222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.972274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.972402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.972442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.972585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.972643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.972761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.972790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.972873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.972904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.973940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.973965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.974951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.974977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.975075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.975105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.975198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.975225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.975334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.975378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.428 [2024-07-15 21:47:27.975456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.428 [2024-07-15 21:47:27.975482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.428 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.975578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.975608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.975737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.975789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.975871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.975898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.976001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.976049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.976160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.976188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.976286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.976317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.976414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.976443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.976532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.976560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.976640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.429 [2024-07-15 21:47:27.976667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.429 qpair failed and we were unable to recover it.
00:25:37.429 [2024-07-15 21:47:27.976752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.976777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.976860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.976886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.976985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.977018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.977125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.977162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.977269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.977313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 
00:25:37.429 [2024-07-15 21:47:27.977396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.977423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.977541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.977591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.977689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.977719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.977837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.977885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.977986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.978014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 
00:25:37.429 [2024-07-15 21:47:27.978116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.978171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.978278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.978321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.978422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.978452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.978557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.978587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.978695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.978743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 
00:25:37.429 [2024-07-15 21:47:27.978871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.978930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.979041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.979098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.979224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.979271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.979403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.979432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.979547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.979595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 
00:25:37.429 [2024-07-15 21:47:27.979708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.979763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.979849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.979875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.979965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.979992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.980083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.980111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.980237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.980284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 
00:25:37.429 [2024-07-15 21:47:27.980370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.980395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.980476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.980503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.980599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.980629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.980741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.980771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.980868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.980894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 
00:25:37.429 [2024-07-15 21:47:27.980972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.429 [2024-07-15 21:47:27.980998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.429 qpair failed and we were unable to recover it. 00:25:37.429 [2024-07-15 21:47:27.981085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.981110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.981203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.981229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.981307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.981332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.981410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.981446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 
00:25:37.430 [2024-07-15 21:47:27.981548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.981577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.981662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.981689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.981787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.981830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.981934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.981964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.982047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.982075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 
00:25:37.430 [2024-07-15 21:47:27.982164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.982191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.982275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.982302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.982383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.982435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.982559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.982585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.982695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.982749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 
00:25:37.430 [2024-07-15 21:47:27.982870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.982913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.983034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.983090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.983245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.983295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.983424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.983483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.983645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.983703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 
00:25:37.430 [2024-07-15 21:47:27.983796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.983827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.983923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.983951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.984065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.984108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.984201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.984228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.984327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.984357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 
00:25:37.430 [2024-07-15 21:47:27.984445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.984471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.984548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.984581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.984685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.984713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.984820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.984862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.984967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.985008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 
00:25:37.430 [2024-07-15 21:47:27.985134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.985196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.985301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.985354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.985488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.985538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.985649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.985697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.985820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.985846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 
00:25:37.430 [2024-07-15 21:47:27.985970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.985999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.986109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.986164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.986258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.986286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.986363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.986390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.986507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.986540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 
00:25:37.430 [2024-07-15 21:47:27.986646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.986677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.430 [2024-07-15 21:47:27.986793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.430 [2024-07-15 21:47:27.986847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.430 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.986970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.987012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.987127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.987180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.987312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.987338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 
00:25:37.431 [2024-07-15 21:47:27.987422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.987448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.987559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.987615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.987727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.987780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.987950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.987993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.988164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.988209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 
00:25:37.431 [2024-07-15 21:47:27.988327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.988375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.988487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.988535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.988660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.988712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.988847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.988876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 00:25:37.431 [2024-07-15 21:47:27.988964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.431 [2024-07-15 21:47:27.988991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.431 qpair failed and we were unable to recover it. 
00:25:37.431 [2024-07-15 21:47:27.989090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.989121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.989251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.989308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.989421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.989471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.989588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.989650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.989759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.989812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.989937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.989970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.990057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.990091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.990198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.990227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.990308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.990334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.990416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.990443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.990535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.990578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.990673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.990703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.990795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.990824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.990919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.990949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.991944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.991975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.992066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.992118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.992289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.992329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.992433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.992479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.992600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.992653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.431 [2024-07-15 21:47:27.992762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.431 [2024-07-15 21:47:27.992801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.431 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.992899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.992947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.993071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.993120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.993252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.993304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.993416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.993463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.993605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.993648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.993759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.993811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.993947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.993998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.994113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.994179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.994295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.994344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.994460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.994507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.994653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.994694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.994806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.994836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.994948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.994975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.995099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.995159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.995275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.995328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.995464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.995513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.995622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.995669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.995782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.995826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.995964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.996007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.996125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.996163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.996256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.996303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.996422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.996477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.996613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.996656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.996775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.996824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.996941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.996972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.997066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.997106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.997231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.997284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.997405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.997459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.997582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.997627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.997753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.997802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.997917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.997969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.998079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.998129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.998276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.998317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.998423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.998469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.432 [2024-07-15 21:47:27.998582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.432 [2024-07-15 21:47:27.998634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.432 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:27.998755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:27.998784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:27.998891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:27.998917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:27.999020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:27.999068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:27.999216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:27.999266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:27.999370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:27.999423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:27.999569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:27.999642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:27.999781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:27.999823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:27.999952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:27.999982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.000077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.000109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.000218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.000251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.000364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.000412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.000528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.000592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.000705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.000742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.000872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.000899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.001014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.001054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.001192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.001219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.001315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.001356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.001442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.001469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.001554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.001581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.001704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.001745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.001856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.001915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.002027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.002057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.002193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.002237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.002364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.002406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.002516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.002571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.002700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.002742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.002873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.002902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.002999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.003029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.003155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.003202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.003302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.003333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.003442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.003472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.003587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.003629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.003713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.003740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.003819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.003845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.003937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.003964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.004054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.004082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.004172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.004198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.004288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.004314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.004404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.004431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.004511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.004537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.433 [2024-07-15 21:47:28.004666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.433 [2024-07-15 21:47:28.004720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.433 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.004839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.004882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.004984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.005033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.005145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.005196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.005316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.005371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.005492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.005551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.005664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.005715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.005841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.005870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.005971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.006002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.006102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.006129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.006224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.006252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.006334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.434 [2024-07-15 21:47:28.006360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.434 qpair failed and we were unable to recover it.
00:25:37.434 [2024-07-15 21:47:28.006437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.006463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.006540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.006566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.006654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.006684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.006771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.006797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.006881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.006908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 
00:25:37.434 [2024-07-15 21:47:28.006989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.007100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.007236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.007353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.007471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 
00:25:37.434 [2024-07-15 21:47:28.007582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.007697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.007814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.007927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.007952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.008033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.008059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 
00:25:37.434 [2024-07-15 21:47:28.008149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.008177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.008278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.008308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.008408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.008463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.008590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.008619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.008716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.008759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 
00:25:37.434 [2024-07-15 21:47:28.008837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.008863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.008967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.009107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.009233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.009360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 
00:25:37.434 [2024-07-15 21:47:28.009469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.009583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.009705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.009810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.434 qpair failed and we were unable to recover it. 00:25:37.434 [2024-07-15 21:47:28.009927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.434 [2024-07-15 21:47:28.009955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.010049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.010090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.010180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.010207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.010292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.010318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.010414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.010454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.010553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.010583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.010722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.010791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.010935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.010981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.011079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.011113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.011229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.011286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.011381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.011412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.011501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.011528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.011613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.011639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.011727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.011758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.011852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.011881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.011962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.011988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.012075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.012101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.012194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.012221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.012310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.012339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.012430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.012460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.012566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.012624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.012752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.012779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.012874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.012919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.013029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.013067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.013154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.013202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.013331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.013386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.013527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.013556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.013658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.013689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.013802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.013834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.013950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.014004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.014089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.014116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.014231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.014283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.014400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.014448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.014588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.014629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.014744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.014797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.014906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.014944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.015050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.015103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.015252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.015306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.015414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.015464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.015578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.015630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.435 [2024-07-15 21:47:28.015747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.015800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 
00:25:37.435 [2024-07-15 21:47:28.015913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.435 [2024-07-15 21:47:28.015952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.435 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.016062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.016088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.016221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.016263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.016374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.016425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.016557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.016608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 
00:25:37.436 [2024-07-15 21:47:28.016733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.016782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.016861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.016888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.016970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.016996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.017070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.017097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.017183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.017210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 
00:25:37.436 [2024-07-15 21:47:28.017296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.017321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.017398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.017424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.017510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.017536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.017618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.017644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.017744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.017769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 
00:25:37.436 [2024-07-15 21:47:28.017857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.017883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.017987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.018042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.018159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.018211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.018331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.018362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.018466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.018513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 
00:25:37.436 [2024-07-15 21:47:28.018628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.018680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.018801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.018847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.018972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.019023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.019163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.019198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.019313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.019362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 
00:25:37.436 [2024-07-15 21:47:28.019470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.019513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.019596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.019623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.019719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.019749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.019849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.019878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.019971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.020000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 
00:25:37.436 [2024-07-15 21:47:28.020089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.020115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.020215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.020243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.020361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.020406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.020489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.020514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.020605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.020636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 
00:25:37.436 [2024-07-15 21:47:28.020736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.436 [2024-07-15 21:47:28.020762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.436 qpair failed and we were unable to recover it. 00:25:37.436 [2024-07-15 21:47:28.020841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.020867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.020964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.020994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.021117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.021166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.021272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.021315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 
00:25:37.437 [2024-07-15 21:47:28.021417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.021450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.021557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.021588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.021682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.021732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.021846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.021895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.022023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.022052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 
00:25:37.437 [2024-07-15 21:47:28.022156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.022204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.022302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.022333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.022433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.022472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.022566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.022592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.022679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.022707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 
00:25:37.437 [2024-07-15 21:47:28.022787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.022814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.022895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.022921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.022998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.023024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.023112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.023146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.023236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.023263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 
00:25:37.437 [2024-07-15 21:47:28.023348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.023375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.023479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.023510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.023611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.023643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.023744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.023787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.023888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.023929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 
00:25:37.437 [2024-07-15 21:47:28.024035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.024061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.024153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.024180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.024286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.024316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.024422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.024452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.024557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.024583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 
00:25:37.437 [2024-07-15 21:47:28.024685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.024712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.024792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.024819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.024900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.024926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.025016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.025120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 
00:25:37.437 [2024-07-15 21:47:28.025233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.025348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.025472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.025584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.025700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 
00:25:37.437 [2024-07-15 21:47:28.025813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.437 [2024-07-15 21:47:28.025919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.437 [2024-07-15 21:47:28.025945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.437 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.026043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.026093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.026230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.026256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.026354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.026403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 
00:25:37.438 [2024-07-15 21:47:28.026518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.026568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.026692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.026742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.026855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.026904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.027021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.027134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 
00:25:37.438 [2024-07-15 21:47:28.027247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.027369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.027492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.027616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.027727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 
00:25:37.438 [2024-07-15 21:47:28.027829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.027940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.027967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.028054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.028079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.028167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.028194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.028279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.028306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 
00:25:37.438 [2024-07-15 21:47:28.028392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.028418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.028499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.028525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.028611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.028639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.028726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.028753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 00:25:37.438 [2024-07-15 21:47:28.028845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.438 [2024-07-15 21:47:28.028872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.438 qpair failed and we were unable to recover it. 
00:25:37.438 [2024-07-15 21:47:28.028954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.028981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.029063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.029090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.029175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.029201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.029288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.029318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.029408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.029436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.029519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.029545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.029624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.029650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.029743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.029777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.029864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.029890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.030947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.030974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.031060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.438 [2024-07-15 21:47:28.031087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.438 qpair failed and we were unable to recover it.
00:25:37.438 [2024-07-15 21:47:28.031176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.031203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.031289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.031317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.031407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.031432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.031560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.031586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.031671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.031697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.031783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.031809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.031896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.031929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.032017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.032043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.032124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.032155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.032272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.032318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.032437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.032489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.032602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.032653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.032759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.032807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.032914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.032961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.033076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.033122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.033256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.033302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.033422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.033472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.033587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.033638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.033772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.033801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.033880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.033906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.034024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.034078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.034187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.034220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.034369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.034395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.034493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.034542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.034660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.034712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.034834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.034864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.034968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.035010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.035129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.035183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.035278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.035308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.035399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.035423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.035524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.035553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.035696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.035722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.035849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.035875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.035965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.035996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.036076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.036103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.036189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.036217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.036307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.036340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.036426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.036453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.036552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.036605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.036726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.036778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.036896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.036943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.037078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.037109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.037206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.037234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.037319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.439 [2024-07-15 21:47:28.037346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.439 qpair failed and we were unable to recover it.
00:25:37.439 [2024-07-15 21:47:28.037449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.037492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.037589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.037641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.037746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.037793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.037875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.037901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.038001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.038031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.038161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.038205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.038304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.038335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.038441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.038471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.038566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.038591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.038682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.038710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.038789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.038815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.038904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.038931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.039023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.039050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.039126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.039158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.039274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.039307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.039411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.039437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.039550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.039581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.039676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.039706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.039817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.039845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.039944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.039975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.040085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.040115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.040219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.040245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.040338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.040368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.040488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.040530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.040620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.040648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.040739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.040765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.040854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.040881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.040969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.040995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.041070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.041096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.041205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.041240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.041352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.041395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.041499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.041542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.041642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.041681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.041801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.041843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.041939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.041970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.042069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.042095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.042192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.042219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.042323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.042365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.042468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.042510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.042607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.042638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.440 qpair failed and we were unable to recover it.
00:25:37.440 [2024-07-15 21:47:28.042745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.440 [2024-07-15 21:47:28.042775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.042889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.042932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.043025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.043065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.043196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.043254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.043410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.043453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.043573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.043621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.043737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.043780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.043903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.043944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.044063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.044089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.044171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.044198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.044302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.044344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.044443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.441 [2024-07-15 21:47:28.044473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.441 qpair failed and we were unable to recover it.
00:25:37.441 [2024-07-15 21:47:28.044583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.044613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.044724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.044755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.044849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.044895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.044996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.045041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.045175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.045227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 
00:25:37.441 [2024-07-15 21:47:28.045350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.045400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.045530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.045567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.045685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.045733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.045854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.045881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.045982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.046036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 
00:25:37.441 [2024-07-15 21:47:28.046133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.046168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.046276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.046319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.046416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.046449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.046569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.046611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.046697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.046724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 
00:25:37.441 [2024-07-15 21:47:28.046808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.441 [2024-07-15 21:47:28.046834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.441 qpair failed and we were unable to recover it. 00:25:37.441 [2024-07-15 21:47:28.046920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.046947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.047037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.047066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.047152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.047199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.047315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.047368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 
00:25:37.442 [2024-07-15 21:47:28.047504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.047544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.047664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.047711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.047830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.047881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.047998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.048026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.048129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.048178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 
00:25:37.442 [2024-07-15 21:47:28.048285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.048327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.048422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.048452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.048568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.048600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.048715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.048758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.048865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.048907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 
00:25:37.442 [2024-07-15 21:47:28.049016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.049069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.049193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.049244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.049365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.049415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.049527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.049575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.049685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.049736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 
00:25:37.442 [2024-07-15 21:47:28.049860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.049888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.049970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.049996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.050119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.050196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.050341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.050383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.050493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.050543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 
00:25:37.442 [2024-07-15 21:47:28.050662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.050687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.050795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.050841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.050959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.051010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.051121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.051170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.051281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.051332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 
00:25:37.442 [2024-07-15 21:47:28.051445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.051492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.051624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.051651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.051777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.051826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.051944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.051983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.052089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.052121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 
00:25:37.442 [2024-07-15 21:47:28.052237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.052262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.052363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.052411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.052536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.052562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.052696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.052742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.052854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.052905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 
00:25:37.442 [2024-07-15 21:47:28.053026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.053052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.442 qpair failed and we were unable to recover it. 00:25:37.442 [2024-07-15 21:47:28.053163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.442 [2024-07-15 21:47:28.053214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.053333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.053392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.053526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.053567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.053679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.053729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 
00:25:37.443 [2024-07-15 21:47:28.053834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.053882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.054002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.054028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.054178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.054205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.054339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.054393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.054513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.054560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 
00:25:37.443 [2024-07-15 21:47:28.054667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.054698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.054797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.054843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.054959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.055007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.055118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.055178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.055294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.055346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 
00:25:37.443 [2024-07-15 21:47:28.055451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.055501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.055611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.055659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.055782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.055807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.055910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.055958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.056083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.056108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 
00:25:37.443 [2024-07-15 21:47:28.056220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.056270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.056384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.056410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.056523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.056549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.056649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.056697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.056812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.056858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 
00:25:37.443 [2024-07-15 21:47:28.056996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.057036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.057148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.057188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.057310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.057337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.057468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.057520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.057656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.057696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 
00:25:37.443 [2024-07-15 21:47:28.057807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.057857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.057989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.058030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.058134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.058185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.058295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.058346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.058476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.058517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 
00:25:37.443 [2024-07-15 21:47:28.058656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.058686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.058797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.058845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.058926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.058957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.059043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.059070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.059156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.059185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 
00:25:37.443 [2024-07-15 21:47:28.059277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.059309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.059400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.443 [2024-07-15 21:47:28.059429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.443 qpair failed and we were unable to recover it. 00:25:37.443 [2024-07-15 21:47:28.059515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.059546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.059626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.059652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.059738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.059765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 
00:25:37.444 [2024-07-15 21:47:28.059842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.059868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.059954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.059980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.060076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.060121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.060247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.060299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.060413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.060464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 
00:25:37.444 [2024-07-15 21:47:28.060585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.060614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.060704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.060736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.060825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.060853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.060947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.060990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.061085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.061113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 
00:25:37.444 [2024-07-15 21:47:28.061206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.061234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.061326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.061355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.061446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.061474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.061560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.061587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.061678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.061708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 
00:25:37.444 [2024-07-15 21:47:28.061802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.061830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.061909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.061934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.062029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.062058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.062160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.062187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.062273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.062300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 
00:25:37.444 [2024-07-15 21:47:28.062385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.062410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.062498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.062524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.062606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.062632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.062721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.062750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.062835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.062861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 
00:25:37.444 [2024-07-15 21:47:28.062970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.063014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.063122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.063172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.063275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.063306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.063416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.063448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.063539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.063566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 
00:25:37.444 [2024-07-15 21:47:28.063647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.063672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.063769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.063812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.063914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.063943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.064029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.064056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.064155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.064204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 
00:25:37.444 [2024-07-15 21:47:28.064330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.064356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.064497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.064528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.064667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.444 [2024-07-15 21:47:28.064711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.444 qpair failed and we were unable to recover it. 00:25:37.444 [2024-07-15 21:47:28.064836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.064875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.064992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.065033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 
00:25:37.445 [2024-07-15 21:47:28.065165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.065193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.065301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.065326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.065428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.065479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.065604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.065643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.065759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.065810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 
00:25:37.445 [2024-07-15 21:47:28.065923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.065975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.066075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.066126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.066239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.066287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.066410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.066458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.066569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.066617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 
00:25:37.445 [2024-07-15 21:47:28.066749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.066789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.066908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.066959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.067065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.067113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.067282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.067330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.067450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.067515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 
00:25:37.445 [2024-07-15 21:47:28.067644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.067674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.067762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.067788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.067869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.067897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.067979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.068007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.068092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.068118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 
00:25:37.445 [2024-07-15 21:47:28.068239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.068283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.068397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.068436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.068542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.068572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.068682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.068733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.068871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.068922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 
00:25:37.445 [2024-07-15 21:47:28.069026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.069079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.069214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.069242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.069337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.069368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.069478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.069508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.069612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.069642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 
00:25:37.445 [2024-07-15 21:47:28.069743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.445 [2024-07-15 21:47:28.069774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.445 qpair failed and we were unable to recover it. 00:25:37.445 [2024-07-15 21:47:28.069873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.069900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.069984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.070085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.070199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 
00:25:37.446 [2024-07-15 21:47:28.070301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.070415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.070524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.070638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.070752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 
00:25:37.446 [2024-07-15 21:47:28.070868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.070894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.070980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.071006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.071115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.071148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.071237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.071263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.071337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.071362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 
00:25:37.446 [2024-07-15 21:47:28.071463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.071505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.071602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.071633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.071751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.071793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.071883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.071909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.071998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.072025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 
00:25:37.446 [2024-07-15 21:47:28.072118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.072157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.072271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.072302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.072420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.072462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.072559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.072589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.072692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.072719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 
00:25:37.446 [2024-07-15 21:47:28.072799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.072825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.072908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.072934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.073014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.073041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.073158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.073192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.073287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.073324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 
00:25:37.446 [2024-07-15 21:47:28.073451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.073507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.073643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.073694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.073805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.073835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.073937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.073964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.074052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.074079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 
00:25:37.446 [2024-07-15 21:47:28.074179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.074206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.074288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.074314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.074398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.074425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.074509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.074535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.074638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.074663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 
00:25:37.446 [2024-07-15 21:47:28.074744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.074783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.446 [2024-07-15 21:47:28.074887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.446 [2024-07-15 21:47:28.074917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.446 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.074999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.075024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.075107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.075168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.075287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.075336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 
00:25:37.447 [2024-07-15 21:47:28.075442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.075487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.075628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.075656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.075759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.075806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.075908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.075949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.076041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.076067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 
00:25:37.447 [2024-07-15 21:47:28.076152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.076180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.076263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.076289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.076368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.076395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.076483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.076509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.076592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.076617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 
00:25:37.447 [2024-07-15 21:47:28.076700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.076725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.076810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.076836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.076959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.077001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.077123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.077179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.077303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.077349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 
00:25:37.447 [2024-07-15 21:47:28.077477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.077528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.077650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.077697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.077830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.077877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.078012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.078041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.078148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.078191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 
00:25:37.447 [2024-07-15 21:47:28.078294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.078338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.078425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.078452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.078545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.078575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.078672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.078698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.078779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.078806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 
00:25:37.447 [2024-07-15 21:47:28.078894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.078921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.079012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.079120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.079247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.079369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 
00:25:37.447 [2024-07-15 21:47:28.079487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.079597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.079717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.079829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.079947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.079989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 
00:25:37.447 [2024-07-15 21:47:28.080087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.080114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.080213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.080240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.447 qpair failed and we were unable to recover it. 00:25:37.447 [2024-07-15 21:47:28.080336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.447 [2024-07-15 21:47:28.080366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.080468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.080495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.080590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.080622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 
00:25:37.448 [2024-07-15 21:47:28.080705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.080731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.080830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.080856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.080966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.081045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.081171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.081198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.081327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.081385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 
00:25:37.448 [2024-07-15 21:47:28.081529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.081556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.081639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.081664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.081742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.081768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.081869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.081913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.082002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.082031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 
00:25:37.448 [2024-07-15 21:47:28.082116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.082153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.082265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.082337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.082462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.082488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.082564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.082591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.082669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.082695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 
00:25:37.448 [2024-07-15 21:47:28.082786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.082844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.082963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.083028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.083163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.083190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.083289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.083350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 00:25:37.448 [2024-07-15 21:47:28.083458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.448 [2024-07-15 21:47:28.083517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.448 qpair failed and we were unable to recover it. 
00:25:37.448 [2024-07-15 21:47:28.083630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.083685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.083824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.083882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.083992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.084059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.084170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.084218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.084343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.084413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.084537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.084562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.084662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.084720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.084836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.084882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.085014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.085070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.085189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.085232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.085349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.085377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.085487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.085554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.085681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.085731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.085841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.085898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.086008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.086061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.086203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.086270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.086392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.086462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.086595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.086636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.448 [2024-07-15 21:47:28.086801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.448 [2024-07-15 21:47:28.086862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.448 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.086978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.087049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.087168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.087230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.087364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.087423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.087531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.087595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.087733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.087816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.087929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.087982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.088083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.088171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.088330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.088388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.088505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.088564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.088676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.088733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.088863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.088920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.089037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.089102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.089235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.089291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.089398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.089450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.089562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.089614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.089732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.089785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.089912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.089962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.090082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.090169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.090281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.090334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.090451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.090511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.090626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.090681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.090810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.090872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.090995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.091053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.091153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.091210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.091325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.091376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.091499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.091537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.091637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.091703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.091815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.091880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.091988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.092048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.092185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.092217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.092297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.092323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.092431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.092499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.092616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.092677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.092870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.092897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.093014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.449 [2024-07-15 21:47:28.093072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.449 qpair failed and we were unable to recover it.
00:25:37.449 [2024-07-15 21:47:28.093209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.093252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.093384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.093443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.093573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.093622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.093735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.093807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.093916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.093979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.094087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.094159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.094271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.094324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.094441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.094494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.094629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.094686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.094868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.094939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.095089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.095164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.095361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.095427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.095586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.095637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.095731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.095761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.095852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.095878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.095962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.095988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.096063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.096089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.096172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.096197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.096293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.096325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.096450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.096482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.096576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.096610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.096777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.096835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.096976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.097046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.097242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.097302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.097416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.097479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.097602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.097653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.097797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.097857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.097979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.098046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.098166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.098220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.098351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.098415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.098544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.098590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.098688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.098715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.098793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.098819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.098900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.098926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.099008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.099033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.099109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.099134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.099224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.099249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.099328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.099354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.099442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.099468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.099542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.099566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.099655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.450 [2024-07-15 21:47:28.099687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.450 qpair failed and we were unable to recover it.
00:25:37.450 [2024-07-15 21:47:28.099800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.099825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.099945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.099973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.100057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.100082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.100227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.100292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.100477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.100536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.100661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.100723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.100847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.100910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.101088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.101130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.101255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.101323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.101445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.101511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.101624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.101687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.101875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.101935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.102046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.102117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.102251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.102322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.102458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.102498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.102651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.102709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.102824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.102881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.103007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.103067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.103193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.103257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.103435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.103494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.103668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.103725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.103852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.103918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.104041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.104096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.104226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.104296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.104417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.104487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.104613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.104671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.104792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.104856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.104974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.105030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.105156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.105210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.105343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.105369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.105466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.105535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.105715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.105774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.105903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.105963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.106084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.106166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.106315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.106375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.106495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.106560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.106678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.106729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.106859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.106888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.107032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.107086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.107173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.107236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.107366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.451 [2024-07-15 21:47:28.107422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.451 qpair failed and we were unable to recover it.
00:25:37.451 [2024-07-15 21:47:28.107553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.107624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.107745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.107800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.107923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.107981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.108168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.108231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.108340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.108390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.108522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.108582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.108786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.108849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 438086 Killed "${NVMF_APP[@]}" "$@"
00:25:37.452 [2024-07-15 21:47:28.108983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.109011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.109102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.109129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.109213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.109239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.109341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[2024-07-15 21:47:28.109386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.109480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.109506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[2024-07-15 21:47:28.109589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.109615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
[2024-07-15 21:47:28.109701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.109729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.109805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
[2024-07-15 21:47:28.109830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.109915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.109942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.110062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.110090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.110179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.110207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.110297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.110327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.110417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.110443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.110533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.110559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.110648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.110674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.110762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.110813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.110982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.111024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.111156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.111218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.111346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.111374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.111451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.111477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.111630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.111681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.111803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.111846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.111945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.111978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.112107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.112211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.112300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.112327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.112442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.112467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.112629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.112692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.112883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.112947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.113094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.113176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.113310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.113372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.113560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.113618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.452 qpair failed and we were unable to recover it.
00:25:37.452 [2024-07-15 21:47:28.113736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.452 [2024-07-15 21:47:28.113793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.113913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.113975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.114107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.114192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.114313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.114378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.114574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.114634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.114761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.114819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.114934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.115000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.115128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.115167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.115321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.115346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.115486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.115520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.115713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.115744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.115861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.115892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.116022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.116051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.116156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.116196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.116299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.116353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=438515
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:37.453 [2024-07-15 21:47:28.116488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 438515
00:25:37.453 [2024-07-15 21:47:28.116555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.116701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.116750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 438515 ']'
00:25:37.453 [2024-07-15 21:47:28.116859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.116886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.116969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:37.453 [2024-07-15 21:47:28.116996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.117094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.117129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.117275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.117301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:37.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:37.453 [2024-07-15 21:47:28.117406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.117454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:37.453 [2024-07-15 21:47:28.117571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.117619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:37.453 [2024-07-15 21:47:28.117730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.117776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.117891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.117946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.118064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.118113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.118246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.118284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.118424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.118476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.118607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.118660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.118826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.118886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.119008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.119044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.119160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.119246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.119410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.119459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.119598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.119643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.119797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.119860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.120007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.120051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.120212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.453 [2024-07-15 21:47:28.120271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.453 qpair failed and we were unable to recover it.
00:25:37.453 [2024-07-15 21:47:28.120358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.120385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.120480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.120514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.120632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.120679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.120787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.120832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.120915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.120942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.121053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.121198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.121309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.121440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.121564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.121680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.121786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.121898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.121978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.122004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.122094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.122120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.122212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.122238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.122338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.122384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.122485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.122534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.122635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.122682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.122787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.122840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.122945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.122991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.123089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.123135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.123237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.123271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.123387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.123440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.123518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.123544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.123660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.123733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.123881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.123923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.124045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.124086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.124200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.124258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.124378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.124428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.124558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.124615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.124753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.124782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.124891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.124940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.125042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.125076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.454 qpair failed and we were unable to recover it.
00:25:37.454 [2024-07-15 21:47:28.125176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.454 [2024-07-15 21:47:28.125204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.125280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.125306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.125387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.125413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.125491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.125517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.125593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.125619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.125700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.125726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.125810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.125837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.125921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.125947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.126037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.126066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.126175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.126216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.126332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.126370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.126459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.126486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.126571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.126598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.126685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.126712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.126789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.126815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.126900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.126926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.127904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.127941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.128047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.128085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.128182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.128211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.128312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.128346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.128526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.128587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.128725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.128806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.128995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.129064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.129206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.129269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.129405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.129439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.129588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.129648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.129763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.129802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.129911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.455 [2024-07-15 21:47:28.129968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.455 qpair failed and we were unable to recover it.
00:25:37.455 [2024-07-15 21:47:28.130093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.455 [2024-07-15 21:47:28.130165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.455 qpair failed and we were unable to recover it. 00:25:37.455 [2024-07-15 21:47:28.130299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.455 [2024-07-15 21:47:28.130347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.455 qpair failed and we were unable to recover it. 00:25:37.455 [2024-07-15 21:47:28.130480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.455 [2024-07-15 21:47:28.130528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.455 qpair failed and we were unable to recover it. 00:25:37.455 [2024-07-15 21:47:28.130648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.455 [2024-07-15 21:47:28.130683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.455 qpair failed and we were unable to recover it. 00:25:37.455 [2024-07-15 21:47:28.130802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.455 [2024-07-15 21:47:28.130848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.455 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.130946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.130979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.131070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.131096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.131207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.131253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.131354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.131402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.131505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.131550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.131642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.131668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.131768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.131813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.131918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.131963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.132050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.132162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.132274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.132391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.132495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.132606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.132717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.132827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.132941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.132968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.133056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.133163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.133276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.133389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.133496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.133602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.133708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.133811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.133922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.133949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.134026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.134052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.134150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.134180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.134279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.134312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.134425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.134493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.134628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.134689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.134820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.134874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.134999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.135057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.135202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.135249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.135346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.135417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.135565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.135624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.135740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.135802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.135927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.135995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.136122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.136162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.456 [2024-07-15 21:47:28.136266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.136312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 
00:25:37.456 [2024-07-15 21:47:28.136444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.456 [2024-07-15 21:47:28.136506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.456 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.136638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.136706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.136843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.136872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.137008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.137081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.137268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.137362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 
00:25:37.457 [2024-07-15 21:47:28.137502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.137567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.137687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.137747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.137866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.137921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.138051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.138115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.138248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.138310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 
00:25:37.457 [2024-07-15 21:47:28.138426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.138497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.138633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.138708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.138839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.138885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.139007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.139041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.139144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.139171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 
00:25:37.457 [2024-07-15 21:47:28.139257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.139284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.139368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.139394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.139477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.139504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.139587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.139613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.139699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.139725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 
00:25:37.457 [2024-07-15 21:47:28.139832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.139867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.139979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.140044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.140186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.140213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.140345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.140391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.140537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.140609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 
00:25:37.457 [2024-07-15 21:47:28.140737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.140806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.140938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.140966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.141069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.141114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.141225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.141288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.141463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.141531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 
00:25:37.457 [2024-07-15 21:47:28.141662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.141719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.141847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.141911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.142046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.142105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.142252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.142311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.142441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.142477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 
00:25:37.457 [2024-07-15 21:47:28.142619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.142686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.142813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.142869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.142992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.143020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.143163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.143216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.143357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.143405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 
00:25:37.457 [2024-07-15 21:47:28.143530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.457 [2024-07-15 21:47:28.143570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.457 qpair failed and we were unable to recover it. 00:25:37.457 [2024-07-15 21:47:28.143685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.143719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.143842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.143895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.144020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.144079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.144216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.144262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 
00:25:37.458 [2024-07-15 21:47:28.144387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.144440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.144573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.144636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.144764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.144830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.144953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.145012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.145151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.145212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 
00:25:37.458 [2024-07-15 21:47:28.145336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.145364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.145479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.145529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.145630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.145676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.145758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.145786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.145871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.145898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 
00:25:37.458 [2024-07-15 21:47:28.145984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.146011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.146089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.146115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.146214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.146240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.146326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.146352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.146457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.146521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 
00:25:37.458 [2024-07-15 21:47:28.146645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.146709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.146848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.146912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.147050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.147078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.147174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.147209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.147326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.147365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 
00:25:37.458 [2024-07-15 21:47:28.147476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.147509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.147608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.147633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.147715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.147742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.147827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.147853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.147936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.147963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 
00:25:37.458 [2024-07-15 21:47:28.148045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.148072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.148157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.148183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.148265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.148291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.148378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.148405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.148488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.148517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 
00:25:37.458 [2024-07-15 21:47:28.148605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.458 [2024-07-15 21:47:28.148633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.458 qpair failed and we were unable to recover it. 00:25:37.458 [2024-07-15 21:47:28.148726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.148755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.148835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.148861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.148950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.148976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.149066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.149092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 
00:25:37.459 [2024-07-15 21:47:28.149196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.149230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.149373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.149429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.149567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.149627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.149780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.149824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.149955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 
00:25:37.459 [2024-07-15 21:47:28.150079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.150213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.150345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.150450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.150566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 
00:25:37.459 [2024-07-15 21:47:28.150672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.150778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.150909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.150938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.151019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.151073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.151208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.151271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 
00:25:37.459 [2024-07-15 21:47:28.151386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.151444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.151586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.151633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.151757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.151815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.151952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.152000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.152102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.152129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 
00:25:37.459 [2024-07-15 21:47:28.152248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.152294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.152386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.152419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.152541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.152582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.152683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.152727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.152823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.152857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 
00:25:37.459 [2024-07-15 21:47:28.152966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.152992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.153073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.153099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.153212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.153255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.153341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.153367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.153467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.153500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 
00:25:37.459 [2024-07-15 21:47:28.153621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.153665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.153762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.153795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.153884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.153910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.153998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.154026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.154115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.154149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 
00:25:37.459 [2024-07-15 21:47:28.154285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.154310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.459 [2024-07-15 21:47:28.154407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.459 [2024-07-15 21:47:28.154463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.459 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.154602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.154648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.154785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.154846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.154979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.155043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 
00:25:37.460 [2024-07-15 21:47:28.155195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.155248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.155372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.155413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.155539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.155566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.155669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.155702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.155872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.155922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 
00:25:37.460 [2024-07-15 21:47:28.156092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.156188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.156355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.156393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.156563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.156667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.156841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.156919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.460 [2024-07-15 21:47:28.157054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.157081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 
00:25:37.460 [2024-07-15 21:47:28.157208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.460 [2024-07-15 21:47:28.157286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.460 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-15 21:47:28.157425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-15 21:47:28.157514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-15 21:47:28.157626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-15 21:47:28.157685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-15 21:47:28.157814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-15 21:47:28.157884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 00:25:37.739 [2024-07-15 21:47:28.158028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.739 [2024-07-15 21:47:28.158096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.739 qpair failed and we were unable to recover it. 
00:25:37.739 [2024-07-15 21:47:28.158251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.158311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.158436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.158500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.158623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.158697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.158835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.158894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.159015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.159072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.159216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.159280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.159406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.159472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.159594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.159655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.159797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.159865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.160030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.160092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.160268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.160328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.160425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.160488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.160608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.160673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.160800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.160855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.160988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.161056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.161171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.161206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.161318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.161387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.161502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.161558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.161691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.161717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.161825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.161881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.162013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.162072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.162211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.162253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.162355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.162417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.162543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.162612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.162759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.162811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.162927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.162986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.163115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.163192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.163346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.163410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.163554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.163584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.163690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.163735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.163819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.163845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.163956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.739 [2024-07-15 21:47:28.163999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.739 qpair failed and we were unable to recover it.
00:25:37.739 [2024-07-15 21:47:28.164099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.164161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.164258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.164285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.164398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.164425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.164545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.164578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.164698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.164746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.164918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.164963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.165043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.165069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.165206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.165257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.165395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.165451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.165589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.165616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.165801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.165860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.165983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.166054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.166247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.166307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.166434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.166490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.166691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.166751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.166924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.166995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.167213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.167285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.167434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.167492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.167698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.167740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.167850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.167917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.168212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.168271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.168467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.168543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.168704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.168772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.168956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.169018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.169157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.169206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.169342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.169401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.169531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.169590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.169775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.169838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.169963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.170034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.170172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.170199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.170351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.170409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.170634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.170679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.170917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.170973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.171086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.171156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.171289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.171351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.171538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.171601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.171723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.171782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.171981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.172042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.172257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.172326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.172478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.740 [2024-07-15 21:47:28.172498] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:25:37.740 [2024-07-15 21:47:28.172548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.740 qpair failed and we were unable to recover it.
00:25:37.740 [2024-07-15 21:47:28.172628] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:37.741 [2024-07-15 21:47:28.172709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.172742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.172856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.172888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.173019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.173051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.173229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.173265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.173377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.173402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.173551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.173583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.173687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.173722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.173849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.173911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.174033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.174068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.174237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.174300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.174419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.174485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.174607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.174679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.174799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.174871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.174990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.175053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.175239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.175311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.175430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.175500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.175619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.175680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.175925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.175984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.176103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.176194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.176293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.176355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.176546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.176605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.176735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.176761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.176983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.177040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.177171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.177233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.177360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.177435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.177645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.177680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.177790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.177819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.177928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.178000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.178130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.178191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.178317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.178373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.178513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.178542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.178627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.178655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.178765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.178817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.178896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.178922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.179005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.179033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.179148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.179175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.179286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.179312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.179428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.179455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.179557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.179631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.179753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.741 [2024-07-15 21:47:28.179821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.741 qpair failed and we were unable to recover it.
00:25:37.741 [2024-07-15 21:47:28.179944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.742 [2024-07-15 21:47:28.179978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.742 qpair failed and we were unable to recover it.
00:25:37.742 [2024-07-15 21:47:28.180074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.742 [2024-07-15 21:47:28.180135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.742 qpair failed and we were unable to recover it.
00:25:37.742 [2024-07-15 21:47:28.180269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.180332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.180459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.180543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.180728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.180786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.180908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.180973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.181098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.181131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-15 21:47:28.181246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.181310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.181493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.181551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.181671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.181742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.181949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.182012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.182133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.182197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-15 21:47:28.182317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.182381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.182552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.182602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.182797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.182854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.182978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.183038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.183175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.183202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-15 21:47:28.183422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.183491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.183678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.183743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.183875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.183957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.184073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.184174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.184287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.184352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-15 21:47:28.184486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.184512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.184693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.184751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.184926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.184984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.185097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.185174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.185664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.185725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-15 21:47:28.185846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.185887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.185976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.186005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.186102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.186130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.186241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.186294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.186377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.186404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-15 21:47:28.186516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.186542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.186652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.186678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.186756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.186782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.186890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.186916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.187047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.187102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 
00:25:37.742 [2024-07-15 21:47:28.187207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.187253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.187330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.187356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.187433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.187459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.742 [2024-07-15 21:47:28.187602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.742 [2024-07-15 21:47:28.187651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.742 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.187793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.187843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-15 21:47:28.187919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.187944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.188056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.188122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.188346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.188413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.188544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.188597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.188721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.188774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-15 21:47:28.188911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.188941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.189064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.189105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.189231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.189280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.189379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.189407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.189493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.189519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-15 21:47:28.189691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.189744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.189897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.189948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.190151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.190196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.190323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.190403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.190556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.190598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-15 21:47:28.190742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.190795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.190898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.190946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.191048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.191094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.191273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.191324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.191535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.191588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-15 21:47:28.191697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.191744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.191826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.191854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.191937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.191964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.192054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.192082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.192190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.192217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-15 21:47:28.192309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.192336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.192416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.192444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.192560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.192588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.192669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.192711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.743 [2024-07-15 21:47:28.192803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.192831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 
00:25:37.743 [2024-07-15 21:47:28.192921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.743 [2024-07-15 21:47:28.192950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.743 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.193134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.193168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.193275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.193321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.193425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.193454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.193567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.193609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-15 21:47:28.193729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.193793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.193926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.193987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.194118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.194166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.194287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.194360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.194522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.194593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-15 21:47:28.194738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.194787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.194904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.194939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.195039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.195066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.195191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.195218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.195379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.195430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-15 21:47:28.195581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.195635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.195742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.195788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.195878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.195906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.196067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.196123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.196250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.196312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-15 21:47:28.196420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.196465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.196569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.196617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.196760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.196813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.196979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.197026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.197230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.197289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-15 21:47:28.197449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.197505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.197646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.197725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.197872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.197936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.198055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.198116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.198275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.198333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-15 21:47:28.198540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.198607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.198805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.198889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.199091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.199167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.199364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.199429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.744 [2024-07-15 21:47:28.199677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.199742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 
00:25:37.744 [2024-07-15 21:47:28.199876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.744 [2024-07-15 21:47:28.199903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.744 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.200032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.200077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.200261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.200327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.200435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.200482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.200637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.200689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 
00:25:37.745 [2024-07-15 21:47:28.200788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.200824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.201027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.201077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.201224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.201275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.201358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.201385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.201532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.201582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 
00:25:37.745 [2024-07-15 21:47:28.201682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.201718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.201865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.201909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.201992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.202019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.202125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.202175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.202365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.202422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 
00:25:37.745 [2024-07-15 21:47:28.202558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.202619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.202810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.202873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.203005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.203033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.203118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.203148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.203229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.203256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 
00:25:37.745 [2024-07-15 21:47:28.203365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.203410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.203496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.203523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.203634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.203662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.203746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.203773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.203852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.203878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 
00:25:37.745 [2024-07-15 21:47:28.204013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.204061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.204150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.204192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.204359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.204411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.204516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.204544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.204694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.204738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 
00:25:37.745 [2024-07-15 21:47:28.204938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.205002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.205176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.205224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.205366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.205422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.205554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.205591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.205707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.205760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 
00:25:37.745 [2024-07-15 21:47:28.205904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.205963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.206094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.206167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.206301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.206362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.206495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.206522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.745 [2024-07-15 21:47:28.206619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.206657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 
00:25:37.745 [2024-07-15 21:47:28.206816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.745 [2024-07-15 21:47:28.206867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.745 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.206994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.207045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.207156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.207203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.207308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.207354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.207471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.207532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 
00:25:37.746 [2024-07-15 21:47:28.207634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.207682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.207786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.207833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.207921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.207950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.208060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.208086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.208176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.208204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 
00:25:37.746 [2024-07-15 21:47:28.208294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.208323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.208412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.208442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.208556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.208583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.208682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.208708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.208843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.208894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 
00:25:37.746 [2024-07-15 21:47:28.208997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.209044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.209223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.209273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.209411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.209462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.209665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.209728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.209897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.209943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 
00:25:37.746 [2024-07-15 21:47:28.210070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.210118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.210273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.210331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.746 [2024-07-15 21:47:28.210476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.210528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.210677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.210743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.210863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.210921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 
00:25:37.746 [2024-07-15 21:47:28.211077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.211153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.211319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.211359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.211502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.211550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.211677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.211705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.211893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.211954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 
00:25:37.746 [2024-07-15 21:47:28.212088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.212119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.212324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.212390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.212571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.212636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.212771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.212801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 00:25:37.746 [2024-07-15 21:47:28.212905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.746 [2024-07-15 21:47:28.212957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.746 qpair failed and we were unable to recover it. 
00:25:37.746 [2024-07-15 21:47:28.213083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.746 [2024-07-15 21:47:28.213151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.746 qpair failed and we were unable to recover it.
00:25:37.746 [2024-07-15 21:47:28.213280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.746 [2024-07-15 21:47:28.213333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.213469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.213530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.213718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.213779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.213909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.213936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.214062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.214087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.214190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.214230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.214354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.214380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.214461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.214486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.214577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.214604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.214682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.214708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.214798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.214827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.214914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.214940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.215055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.215199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.215305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.215445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.215565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.215674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.215788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.215899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.215991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.216907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.216993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.217115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.217268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.217388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.217495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.217596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.217708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.217811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.217927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.217955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.747 qpair failed and we were unable to recover it.
00:25:37.747 [2024-07-15 21:47:28.218049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.747 [2024-07-15 21:47:28.218075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.218162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.218189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.218273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.218299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.218382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.218409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.218485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.218511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.218591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.218620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.218710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.218739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.218828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.218856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.218938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.218964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.219953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.219978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.220080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.220193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.220306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.220416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.220552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.220658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.220762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.220871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.220985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.221013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.221105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.221134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.221229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.221256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.221344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.221370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.221457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.221484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.221569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.748 [2024-07-15 21:47:28.221595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.748 qpair failed and we were unable to recover it.
00:25:37.748 [2024-07-15 21:47:28.221683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.221712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.221798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.221826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.221909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.221935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.222948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.222977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.223061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.223089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.223181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.223210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.223293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.223319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.223412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.223438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.223552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.223579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.223663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.223689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.223781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.223810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.223897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.223923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.224008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.224035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.224123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.224156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.224246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.224271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.224381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.749 [2024-07-15 21:47:28.224406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.749 qpair failed and we were unable to recover it.
00:25:37.749 [2024-07-15 21:47:28.224489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.224517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.224596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.224622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.224723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.224750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.224841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.224867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.224950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.224976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 
00:25:37.749 [2024-07-15 21:47:28.225059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.225188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.225295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.225405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.225517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 
00:25:37.749 [2024-07-15 21:47:28.225620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.225724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.225823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.225925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.225950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 00:25:37.749 [2024-07-15 21:47:28.226028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.749 [2024-07-15 21:47:28.226054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.749 qpair failed and we were unable to recover it. 
00:25:37.749 [2024-07-15 21:47:28.226149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.226178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.226264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.226291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.226407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.226436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.226522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.226548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.226630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.226656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 
00:25:37.750 [2024-07-15 21:47:28.226737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.226764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.226851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.226879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.226960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.226986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.227072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.227098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.227196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.227222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 
00:25:37.750 [2024-07-15 21:47:28.227339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.227365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.227481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.227509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.227596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.227623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.227709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.227738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.227818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.227844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 
00:25:37.750 [2024-07-15 21:47:28.227932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.227957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.228039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.228150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.228286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.228395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 
00:25:37.750 [2024-07-15 21:47:28.228516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.228628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.228738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.228845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.228961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.228987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 
00:25:37.750 [2024-07-15 21:47:28.229068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.229093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.229194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.229222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.229308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.229334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.229417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.229443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.229530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.229556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 
00:25:37.750 [2024-07-15 21:47:28.229677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.229703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.229796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.229824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.229911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.229939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.230027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.230053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.230155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.230181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 
00:25:37.750 [2024-07-15 21:47:28.230265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.230290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.230374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.230400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.230482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.230509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.230595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.230622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.230733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.230758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 
00:25:37.750 [2024-07-15 21:47:28.230840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.750 [2024-07-15 21:47:28.230866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.750 qpair failed and we were unable to recover it. 00:25:37.750 [2024-07-15 21:47:28.230942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.230968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.231048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.231160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.231267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 
00:25:37.751 [2024-07-15 21:47:28.231402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.231513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.231623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.231728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.231836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 
00:25:37.751 [2024-07-15 21:47:28.231950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.231976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.232056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.232186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.232294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.232402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 
00:25:37.751 [2024-07-15 21:47:28.232507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.232608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.232720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.232836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.232952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.232981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 
00:25:37.751 [2024-07-15 21:47:28.233066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.233093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.233188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.233215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.233297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.233323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.233418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.233445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.233526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.233554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 
00:25:37.751 [2024-07-15 21:47:28.233643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.233670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.233752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.233778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.233876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.233902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.233978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.234004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.234085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.234110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 
00:25:37.751 [2024-07-15 21:47:28.234206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.234233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.234313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.234339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.234426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.234452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.234538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.234571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 00:25:37.751 [2024-07-15 21:47:28.234649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.751 [2024-07-15 21:47:28.234675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.751 qpair failed and we were unable to recover it. 
00:25:37.752 [2024-07-15 21:47:28.236099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.752 [2024-07-15 21:47:28.236128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.752 qpair failed and we were unable to recover it.
00:25:37.752 [2024-07-15 21:47:28.236349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.752 [2024-07-15 21:47:28.236378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.752 qpair failed and we were unable to recover it.
00:25:37.754 [2024-07-15 21:47:28.245149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:37.754 [2024-07-15 21:47:28.247372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.247397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.754 qpair failed and we were unable to recover it. 00:25:37.754 [2024-07-15 21:47:28.247522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.247547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.754 qpair failed and we were unable to recover it. 00:25:37.754 [2024-07-15 21:47:28.247629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.247655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.754 qpair failed and we were unable to recover it. 00:25:37.754 [2024-07-15 21:47:28.247746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.247774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.754 qpair failed and we were unable to recover it. 00:25:37.754 [2024-07-15 21:47:28.247865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.247892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.754 qpair failed and we were unable to recover it. 
00:25:37.754 [2024-07-15 21:47:28.247984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.248013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.754 qpair failed and we were unable to recover it. 00:25:37.754 [2024-07-15 21:47:28.248095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.248121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.754 qpair failed and we were unable to recover it. 00:25:37.754 [2024-07-15 21:47:28.248216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.248243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.754 qpair failed and we were unable to recover it. 00:25:37.754 [2024-07-15 21:47:28.248330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-07-15 21:47:28.248356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.248441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.248467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 
00:25:37.755 [2024-07-15 21:47:28.248560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.248588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.248671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.248700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.248781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.248807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.248896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.248922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.249011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 
00:25:37.755 [2024-07-15 21:47:28.249119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.249286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.249393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.249508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.249619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 
00:25:37.755 [2024-07-15 21:47:28.249724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.249833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.249936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.249966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.250054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.250175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 
00:25:37.755 [2024-07-15 21:47:28.250287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.250391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.250491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.250597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.250710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 
00:25:37.755 [2024-07-15 21:47:28.250820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.250931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.250960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.251092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.251119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.251210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.251237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.251328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.251354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 
00:25:37.755 [2024-07-15 21:47:28.251431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.251458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.251540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.251567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.251657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.251682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.251758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.251784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.251872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.251897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 
00:25:37.755 [2024-07-15 21:47:28.251974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.252092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.252207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.252318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.252429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 
00:25:37.755 [2024-07-15 21:47:28.252541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.252650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.252756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.252865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.252892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.755 qpair failed and we were unable to recover it. 00:25:37.755 [2024-07-15 21:47:28.252980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.755 [2024-07-15 21:47:28.253008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 
00:25:37.756 [2024-07-15 21:47:28.253097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.253124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.253207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.253232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.253321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.253346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.253429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.253455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.253537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.253564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 
00:25:37.756 [2024-07-15 21:47:28.253654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.253681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.253764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.253790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.253871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.253897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.253983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.254096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 
00:25:37.756 [2024-07-15 21:47:28.254215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.254321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.254431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.254542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.254654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 
00:25:37.756 [2024-07-15 21:47:28.254770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.254880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.254907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.254987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.255012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.255098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.255123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.255219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.255246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 
00:25:37.756 [2024-07-15 21:47:28.255332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.255360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.255448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.255473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.255554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.255579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.255666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.255691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 00:25:37.756 [2024-07-15 21:47:28.255776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.756 [2024-07-15 21:47:28.255801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.756 qpair failed and we were unable to recover it. 
00:25:37.756 [2024-07-15 21:47:28.255883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.255909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.256030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.256168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.256276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.256399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.256523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.256655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.256773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.756 [2024-07-15 21:47:28.256890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.756 qpair failed and we were unable to recover it.
00:25:37.756 [2024-07-15 21:47:28.256975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.257890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.257982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.258915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.258996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.259114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.259230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.259340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.259448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.259555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.259664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.259780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.259891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.259919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.260035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.260146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.260260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.260477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.260582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.260696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.260810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.260919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.260998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.261023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.261104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.261129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.261225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.261252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.261333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.261358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.261438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.261463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.261548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.757 [2024-07-15 21:47:28.261574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.757 qpair failed and we were unable to recover it.
00:25:37.757 [2024-07-15 21:47:28.261667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.261695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.261780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.261806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.261895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.261921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.262901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.262927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.263932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.263959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.264054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.264082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.264175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.264203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.264292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.264319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.264405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.264432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.264510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.264536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.264728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.264754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.264843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.264868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.264945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.264970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.265965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.265993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.266074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.266103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.266192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.266218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.266301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.758 [2024-07-15 21:47:28.266326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.758 qpair failed and we were unable to recover it.
00:25:37.758 [2024-07-15 21:47:28.266408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.266433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.266519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.266544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.266628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.266653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.266736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.266765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.266851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.266877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.266969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.266995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.267082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.267108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.267192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.267220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.267420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.267449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.267535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.267561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.267645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.267672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.267755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.267781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.267864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.267890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.267969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.267997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.268082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.268111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.268214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.268243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.268335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.268362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.268459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.268487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.268575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.268601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.268682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.268708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.268792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.268818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.268908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.268935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.269015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.269041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.269122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.759 [2024-07-15 21:47:28.269156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.759 qpair failed and we were unable to recover it.
00:25:37.759 [2024-07-15 21:47:28.269240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.269268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.269353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.269380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.269469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.269497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.269583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.269608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.269695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.269721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 
00:25:37.759 [2024-07-15 21:47:28.269798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.269824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.269924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.269954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.270040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.270069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.270161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.270187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.270273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.270299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 
00:25:37.759 [2024-07-15 21:47:28.270383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.270409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.270495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.270524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.270603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.759 [2024-07-15 21:47:28.270632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.759 qpair failed and we were unable to recover it. 00:25:37.759 [2024-07-15 21:47:28.270722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.270750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.270841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.270868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 
00:25:37.760 [2024-07-15 21:47:28.270957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.270985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.271062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.271087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.271221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.271250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.271340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.271367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.271460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.271488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 
00:25:37.760 [2024-07-15 21:47:28.271583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.271609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.271688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.271714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.271802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.271828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.271908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.271936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.272021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 
00:25:37.760 [2024-07-15 21:47:28.272131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.272256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.272364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.272476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.272580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 
00:25:37.760 [2024-07-15 21:47:28.272686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.272796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.272911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.272937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.273021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.273148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 
00:25:37.760 [2024-07-15 21:47:28.273259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.273373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.273487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.273603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.273713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 
00:25:37.760 [2024-07-15 21:47:28.273823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.273932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.273959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.274044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.274151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.274265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 
00:25:37.760 [2024-07-15 21:47:28.274375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.274484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.274602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.274712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.274831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 
00:25:37.760 [2024-07-15 21:47:28.274952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.274981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.760 [2024-07-15 21:47:28.275061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.760 [2024-07-15 21:47:28.275086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.760 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.275179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.275207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.275294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.275319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.275395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.275420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 
00:25:37.761 [2024-07-15 21:47:28.275509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.275535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.275625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.275652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.275731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.275756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.275948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.275975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.276055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.276082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 
00:25:37.761 [2024-07-15 21:47:28.276196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.276225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.276307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.276333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.276416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.276443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.276530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.276556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.276646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.276675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 
00:25:37.761 [2024-07-15 21:47:28.276766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.276795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.276877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.276904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.276990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.277015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.277090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.277116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.277207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.277235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 
00:25:37.761 [2024-07-15 21:47:28.277315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.277341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.277424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.277450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.277532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.277558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.277638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.277669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 00:25:37.761 [2024-07-15 21:47:28.277750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.761 [2024-07-15 21:47:28.277777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.761 qpair failed and we were unable to recover it. 
00:25:37.761 [2024-07-15 21:47:28.277868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.277896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.761 qpair failed and we were unable to recover it.
00:25:37.761 [2024-07-15 21:47:28.277971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.277997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.761 qpair failed and we were unable to recover it.
00:25:37.761 [2024-07-15 21:47:28.278076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.278103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.761 qpair failed and we were unable to recover it.
00:25:37.761 [2024-07-15 21:47:28.278190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.278217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.761 qpair failed and we were unable to recover it.
00:25:37.761 [2024-07-15 21:47:28.278301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.278328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.761 qpair failed and we were unable to recover it.
00:25:37.761 [2024-07-15 21:47:28.278414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.278441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.761 qpair failed and we were unable to recover it.
00:25:37.761 [2024-07-15 21:47:28.278523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.278550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.761 qpair failed and we were unable to recover it.
00:25:37.761 [2024-07-15 21:47:28.278626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.278651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.761 qpair failed and we were unable to recover it.
00:25:37.761 [2024-07-15 21:47:28.278732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.761 [2024-07-15 21:47:28.278758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.278837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.278869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.278961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.278990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.279100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.279224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.279334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.279445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.279555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.279663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.279771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.279886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.279971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.280909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.280995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.281107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.281230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.281341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.281450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.281566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.281674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.281787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.281891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.281917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.282904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.282931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.283011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.283038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.762 [2024-07-15 21:47:28.283119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.762 [2024-07-15 21:47:28.283151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.762 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.283238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.283263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.283344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.283369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.283458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.283486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.283579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.283608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.283699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.283727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.283811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.283839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.283919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.283945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.284907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.284934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.285890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.285916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.286909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.286936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.287021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.287047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.287129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.287163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.287245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.287274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.287358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.287386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.287478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.287504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.763 qpair failed and we were unable to recover it.
00:25:37.763 [2024-07-15 21:47:28.287586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.763 [2024-07-15 21:47:28.287612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.287701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.287726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.287805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.287834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.287925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.287952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.288963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.288990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.289071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.289098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.289203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.289230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.289312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.289339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.289426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.289452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.289536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.289563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.289650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.289676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.289783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.289809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.289882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.289909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.290000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.290028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.290111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.290136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.290243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.290270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.290355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.290381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.290472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.290498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.290583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.290610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.290694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.764 [2024-07-15 21:47:28.290721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.764 qpair failed and we were unable to recover it.
00:25:37.764 [2024-07-15 21:47:28.290809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.290835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 00:25:37.764 [2024-07-15 21:47:28.290915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.290941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 00:25:37.764 [2024-07-15 21:47:28.291025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.291051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 00:25:37.764 [2024-07-15 21:47:28.291133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.291164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 00:25:37.764 [2024-07-15 21:47:28.291253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.291281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 
00:25:37.764 [2024-07-15 21:47:28.291360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.291386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 00:25:37.764 [2024-07-15 21:47:28.291462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.291488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 00:25:37.764 [2024-07-15 21:47:28.291567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.291596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 00:25:37.764 [2024-07-15 21:47:28.291676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.291704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 00:25:37.764 [2024-07-15 21:47:28.291798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.764 [2024-07-15 21:47:28.291827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.764 qpair failed and we were unable to recover it. 
00:25:37.765 [2024-07-15 21:47:28.291918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.291945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.292029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.292161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.292279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.292385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 
00:25:37.765 [2024-07-15 21:47:28.292489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.292597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.292700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.292821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.292928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.292954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 
00:25:37.765 [2024-07-15 21:47:28.293035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.293148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.293263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.293362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.293473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 
00:25:37.765 [2024-07-15 21:47:28.293591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.293710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.293823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.293929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.293955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.294033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 
00:25:37.765 [2024-07-15 21:47:28.294146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.294265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.294376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.294512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.294624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 
00:25:37.765 [2024-07-15 21:47:28.294747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.294859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.294966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.294992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.295073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.295100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.295189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.295215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 
00:25:37.765 [2024-07-15 21:47:28.295295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.295320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.295420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.765 [2024-07-15 21:47:28.295445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.765 qpair failed and we were unable to recover it. 00:25:37.765 [2024-07-15 21:47:28.295522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.295547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.295634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.295661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.295753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.295786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 
00:25:37.766 [2024-07-15 21:47:28.295877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.295903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.295987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.296105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.296231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.296349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 
00:25:37.766 [2024-07-15 21:47:28.296458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.296580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.296700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.296813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.296930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.296956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 
00:25:37.766 [2024-07-15 21:47:28.297154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.297180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.297258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.297284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.297371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.297398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.297486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.297514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.297591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.297616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 
00:25:37.766 [2024-07-15 21:47:28.297690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.297715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.297795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.297821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.297906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.297931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.298012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.298123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 
00:25:37.766 [2024-07-15 21:47:28.298243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.298355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.298463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.298570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.298690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 
00:25:37.766 [2024-07-15 21:47:28.298805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.298923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.298950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.299030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.299057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.299151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.299177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.299258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.299285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 
00:25:37.766 [2024-07-15 21:47:28.299379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.299406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.299495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.299522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.299603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.766 [2024-07-15 21:47:28.299630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.766 qpair failed and we were unable to recover it. 00:25:37.766 [2024-07-15 21:47:28.299712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.767 [2024-07-15 21:47:28.299738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.767 qpair failed and we were unable to recover it. 00:25:37.767 [2024-07-15 21:47:28.299822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.767 [2024-07-15 21:47:28.299849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.767 qpair failed and we were unable to recover it. 
00:25:37.767 [2024-07-15 21:47:28.299931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.767 [2024-07-15 21:47:28.299957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.767 qpair failed and we were unable to recover it. 00:25:37.767 [2024-07-15 21:47:28.300047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.767 [2024-07-15 21:47:28.300080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.767 qpair failed and we were unable to recover it. 00:25:37.767 [2024-07-15 21:47:28.300178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.767 [2024-07-15 21:47:28.300207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.767 qpair failed and we were unable to recover it. 00:25:37.767 [2024-07-15 21:47:28.300296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.767 [2024-07-15 21:47:28.300323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.767 qpair failed and we were unable to recover it. 00:25:37.767 [2024-07-15 21:47:28.300408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.767 [2024-07-15 21:47:28.300439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.767 qpair failed and we were unable to recover it. 
00:25:37.767 [2024-07-15 21:47:28.300526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.300554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.300642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.300669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.300753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.300779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.300867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.300894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.300976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.301894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.301979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.302969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.302995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.303103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.303222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.303332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.303444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.303551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.303665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.303775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.767 [2024-07-15 21:47:28.303886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.767 qpair failed and we were unable to recover it.
00:25:37.767 [2024-07-15 21:47:28.303967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.303993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.304090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.304117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.304207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.304236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.304323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.304351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.304433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.304459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.304547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.304574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.304654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.304682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.304777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.304804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.304888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.304914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.305931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.305959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.306932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.306957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.307947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.307981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.308073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.308100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.308290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.308319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.308412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.768 [2024-07-15 21:47:28.308440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.768 qpair failed and we were unable to recover it.
00:25:37.768 [2024-07-15 21:47:28.308527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.308553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.308638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.308665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.308742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.308769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.308853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.308881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.308972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.309907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.309996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.310022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.310154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.310181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.310262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.310289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.310376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.310404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.310483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.310509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.310597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.310622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.310702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.310728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.310854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.310879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.311917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.311944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.769 [2024-07-15 21:47:28.312961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.769 [2024-07-15 21:47:28.312987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.769 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.313071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.313098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.313192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.313219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.313300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.313325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.313409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.313435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.313525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.313554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.313653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.313681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.313781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.313811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.313895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.313923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.314934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.314961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.315953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.315979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.316064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.316091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.316185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.316212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.316300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.316327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.316405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.770 [2024-07-15 21:47:28.316431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.770 qpair failed and we were unable to recover it.
00:25:37.770 [2024-07-15 21:47:28.316516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.316542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.316629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.316657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.316743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.316771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.316865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.316893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.316976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.317086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.317206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.317312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.317424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.317536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.317648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.317754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.317866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.317894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.318948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.318979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.319962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.319988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.320089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.320199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.320311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.320421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.320538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.320649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.320754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.771 [2024-07-15 21:47:28.320870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.771 qpair failed and we were unable to recover it.
00:25:37.771 [2024-07-15 21:47:28.320956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.320984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.321116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.321228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.321345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.321455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.321555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.321687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.321796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.321909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.321988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.322015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.322106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.322135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.322226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.772 [2024-07-15 21:47:28.322251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420
00:25:37.772 qpair failed and we were unable to recover it.
00:25:37.772 [2024-07-15 21:47:28.322339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.322364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.322447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.322473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.322552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.322580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.322664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.322689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.322772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.322798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 
00:25:37.772 [2024-07-15 21:47:28.322874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.322900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.322985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.323099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.323223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.323336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 
00:25:37.772 [2024-07-15 21:47:28.323448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.323568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.323670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.323782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.323893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.323921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 
00:25:37.772 [2024-07-15 21:47:28.324012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.324125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.324242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.324356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.324462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 
00:25:37.772 [2024-07-15 21:47:28.324575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.324680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.324792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.324899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.324928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.325023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.325050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 
00:25:37.772 [2024-07-15 21:47:28.325134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.325169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.325258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.325285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.325366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.325392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.325468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.772 [2024-07-15 21:47:28.325493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.772 qpair failed and we were unable to recover it. 00:25:37.772 [2024-07-15 21:47:28.325575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.325601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.325684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.325710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.325797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.325825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.325914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.325942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.326033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.326147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.326251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.326362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.326475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.326595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.326703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.326808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.326919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.326944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.327023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.327127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.327248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.327355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.327460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.327578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.327687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.327795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.327901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.327926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.328010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.328108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.328226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.328336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.328442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.328552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.328662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.328771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.328884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.328910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.328991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.329097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.329215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.329325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.329436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.329549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.329659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.329770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.329879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.329905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 00:25:37.773 [2024-07-15 21:47:28.329987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.773 [2024-07-15 21:47:28.330014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.773 qpair failed and we were unable to recover it. 
00:25:37.773 [2024-07-15 21:47:28.330104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.330130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.330227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.330256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.330342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.330369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.330451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.330477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.330557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.330582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 
00:25:37.774 [2024-07-15 21:47:28.330663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.330689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.330798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.330825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.330905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.330930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.331017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.331119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 
00:25:37.774 [2024-07-15 21:47:28.331239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.331352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.331465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.331582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc080000b90 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.331688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc090000b90 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 
00:25:37.774 [2024-07-15 21:47:28.331808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.331924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.331950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.332034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.332059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.332150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.332176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 00:25:37.774 [2024-07-15 21:47:28.332256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.774 [2024-07-15 21:47:28.332281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196190 with addr=10.0.0.2, port=4420 00:25:37.774 qpair failed and we were unable to recover it. 
00:25:37.775 [2024-07-15 21:47:28.337483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.775 [2024-07-15 21:47:28.337508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.775 qpair failed and we were unable to recover it.
00:25:37.775 [2024-07-15 21:47:28.337594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.775 [2024-07-15 21:47:28.337619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc088000b90 with addr=10.0.0.2, port=4420
00:25:37.775 qpair failed and we were unable to recover it.
00:25:37.775 A controller has encountered a failure and is being reset.
00:25:37.775 [2024-07-15 21:47:28.337748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.775 [2024-07-15 21:47:28.337784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a4170 with addr=10.0.0.2, port=4420
[2024-07-15 21:47:28.337811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a4170 is same with the state(5) to be set
00:25:37.775 [2024-07-15 21:47:28.337848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a4170 (9): Bad file descriptor
00:25:37.775 [2024-07-15 21:47:28.337878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:37.775 [2024-07-15 21:47:28.337900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:37.775 [2024-07-15 21:47:28.337926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:37.775 Unable to reset the controller.
00:25:37.775 [2024-07-15 21:47:28.365530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:37.775 [2024-07-15 21:47:28.365583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:37.775 [2024-07-15 21:47:28.365599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:37.775 [2024-07-15 21:47:28.365612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:37.775 [2024-07-15 21:47:28.365625] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:37.775 [2024-07-15 21:47:28.365956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:37.775 [2024-07-15 21:47:28.366005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:37.775 [2024-07-15 21:47:28.366009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:37.775 [2024-07-15 21:47:28.365983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:37.775 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:37.775 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:25:37.775 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:37.775 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:37.776 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:38.033
21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.033 Malloc0 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.033 [2024-07-15 21:47:28.546162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.033 21:47:28 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.033 [2024-07-15 21:47:28.574371] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.033 21:47:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 438116 00:25:38.960 Controller properly reset. 
00:25:44.212 Initializing NVMe Controllers
00:25:44.212 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:44.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:44.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:25:44.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:25:44.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:25:44.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:25:44.212 Initialization complete. Launching workers.
00:25:44.212 Starting thread on core 1
00:25:44.212 Starting thread on core 2
00:25:44.212 Starting thread on core 3
00:25:44.212 Starting thread on core 0
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:25:44.212
00:25:44.212 real 0m10.734s
00:25:44.212 user 0m33.386s
00:25:44.212 sys 0m8.161s
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:44.212 ************************************
00:25:44.212 END TEST nvmf_target_disconnect_tc2
00:25:44.212 ************************************
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:44.212 21:47:34
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:44.212 rmmod nvme_tcp 00:25:44.212 rmmod nvme_fabrics 00:25:44.212 rmmod nvme_keyring 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 438515 ']' 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 438515 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 438515 ']' 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 438515 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 438515 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 438515' 00:25:44.212 killing process with pid 438515 
00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 438515 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 438515 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.212 21:47:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.127 21:47:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:46.127 00:25:46.127 real 0m15.121s 00:25:46.127 user 0m58.134s 00:25:46.127 sys 0m10.574s 00:25:46.127 21:47:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:46.127 21:47:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:46.127 ************************************ 00:25:46.127 END TEST nvmf_target_disconnect 00:25:46.127 ************************************ 00:25:46.127 21:47:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:46.127 21:47:36 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:46.127 21:47:36 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.127 21:47:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.127 21:47:36 nvmf_tcp -- nvmf/nvmf.sh@128 
-- # trap - SIGINT SIGTERM EXIT 00:25:46.127 00:25:46.127 real 19m42.722s 00:25:46.127 user 48m17.906s 00:25:46.127 sys 4m32.856s 00:25:46.127 21:47:36 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:46.127 21:47:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.127 ************************************ 00:25:46.127 END TEST nvmf_tcp 00:25:46.127 ************************************ 00:25:46.127 21:47:36 -- common/autotest_common.sh@1142 -- # return 0 00:25:46.127 21:47:36 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:46.127 21:47:36 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:46.127 21:47:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:46.127 21:47:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.127 21:47:36 -- common/autotest_common.sh@10 -- # set +x 00:25:46.127 ************************************ 00:25:46.127 START TEST spdkcli_nvmf_tcp 00:25:46.127 ************************************ 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:46.127 * Looking for test storage... 
00:25:46.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.127 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=439457
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 439457
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 439457 ']'
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:46.385 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:46.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:46.386 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:46.386 21:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:46.386 [2024-07-15 21:47:36.980100] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization...
00:25:46.386 [2024-07-15 21:47:36.980216] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439457 ]
00:25:46.386 EAL: No free 2048 kB hugepages reported on node 1
00:25:46.386 [2024-07-15 21:47:37.043571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:46.386 [2024-07-15 21:47:37.167167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:46.386 [2024-07-15 21:47:37.167186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:46.643 21:47:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:25:46.643 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:25:46.643 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:25:46.643 '\''/bdevs/malloc create 32 512 Malloc4'\''
'\''Malloc4'\'' True 00:25:46.643 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:46.643 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:46.643 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:46.643 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:46.643 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:46.643 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:46.643 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:46.643 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:46.643 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:46.643 ' 00:25:49.166 [2024-07-15 21:47:39.848797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.535 [2024-07-15 21:47:41.072845] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:53.112 [2024-07-15 21:47:43.363592] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:55.009 [2024-07-15 21:47:45.297378] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:56.379 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:56.379 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:56.379 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:56.379 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:56.379 Executing command: 
['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:56.379 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:56.379 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:56.379 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.379 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.379 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:56.379 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:56.379 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:56.379 21:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:56.379 21:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.379 21:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.379 21:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:56.379 21:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.379 21:47:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.379 21:47:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:56.379 21:47:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:56.636 21:47:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:56.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:56.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:56.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:56.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:56.637 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:56.637 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 
00:25:56.637 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:56.637 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:56.637 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:56.637 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:56.637 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:56.637 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:56.637 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:56.637 ' 00:26:01.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:01.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:01.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:01.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:01.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:01.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:01.889 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:01.889 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:01.889 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:01.889 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:01.889 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:01.889 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:01.889 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:01.889 Executing command: 
['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 439457 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 439457 ']' 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 439457 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 439457 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 439457' 00:26:01.889 killing process with pid 439457 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 439457 00:26:01.889 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 439457 00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 439457 ']' 00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 439457 00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 439457 ']' 00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 439457 00:26:02.147 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (439457) - No such process
00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 439457 is not found'
00:26:02.147 Process with pid 439457 is not found
00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:26:02.147
00:26:02.147 real 0m15.981s
00:26:02.147 user 0m33.878s
00:26:02.147 sys 0m0.753s
00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:02.147 21:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:02.147 ************************************
00:26:02.147 END TEST spdkcli_nvmf_tcp
00:26:02.147 ************************************
00:26:02.147 21:47:52 -- common/autotest_common.sh@1142 -- # return 0
00:26:02.147 21:47:52 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:26:02.147 21:47:52 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:02.147 21:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:02.147 21:47:52 -- common/autotest_common.sh@10 -- # set +x
00:26:02.147 ************************************
00:26:02.147 START TEST nvmf_identify_passthru
00:26:02.147 ************************************
00:26:02.147 21:47:52 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:26:02.147 * Looking for test storage...
00:26:02.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:02.147 21:47:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.147 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.405 
21:47:52 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.405 21:47:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.405 21:47:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.405 21:47:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:02.405 21:47:52 nvmf_identify_passthru -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:02.405 21:47:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.405 21:47:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.405 21:47:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.405 21:47:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:02.405 21:47:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.405 21:47:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.405 21:47:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:02.405 21:47:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:02.405 21:47:52 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:02.405 21:47:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:04.312 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.313 
21:47:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:04.313 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:04.313 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:04.313 Found net devices under 0000:08:00.0: cvl_0_0 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:04.313 Found net devices under 0000:08:00.1: cvl_0_1 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:26:04.313 00:26:04.313 --- 10.0.0.2 ping statistics --- 00:26:04.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.313 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:04.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:26:04.313 00:26:04.313 --- 10.0.0.1 ping statistics --- 00:26:04.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.313 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.313 21:47:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.313 21:47:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:04.313 21:47:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:26:04.313 21:47:54 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:84:00.0 00:26:04.313 21:47:54 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:84:00.0 00:26:04.313 21:47:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:26:04.313 21:47:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:26:04.313 21:47:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:26:04.313 21:47:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:04.313 21:47:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:04.313 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.496 21:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:26:08.497 21:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:26:08.497 21:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:26:08.497 21:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:08.497 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.675 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:12.675 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.675 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.675 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=443124 00:26:12.675 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:12.675 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:12.675 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 443124 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 443124 ']' 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:12.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.675 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.675 [2024-07-15 21:48:03.267910] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:26:12.675 [2024-07-15 21:48:03.267986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.675 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.675 [2024-07-15 21:48:03.318715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.675 [2024-07-15 21:48:03.412645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.675 [2024-07-15 21:48:03.412691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.675 [2024-07-15 21:48:03.412704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.675 [2024-07-15 21:48:03.412714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.675 [2024-07-15 21:48:03.412724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:12.675 [2024-07-15 21:48:03.412790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.675 [2024-07-15 21:48:03.412879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.675 [2024-07-15 21:48:03.412987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.675 [2024-07-15 21:48:03.412990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:26:12.932 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.932 INFO: Log level set to 20 00:26:12.932 INFO: Requests: 00:26:12.932 { 00:26:12.932 "jsonrpc": "2.0", 00:26:12.932 "method": "nvmf_set_config", 00:26:12.932 "id": 1, 00:26:12.932 "params": { 00:26:12.932 "admin_cmd_passthru": { 00:26:12.932 "identify_ctrlr": true 00:26:12.932 } 00:26:12.932 } 00:26:12.932 } 00:26:12.932 00:26:12.932 INFO: response: 00:26:12.932 { 00:26:12.932 "jsonrpc": "2.0", 00:26:12.932 "id": 1, 00:26:12.932 "result": true 00:26:12.932 } 00:26:12.932 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.932 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.932 INFO: Setting log level to 20 00:26:12.932 INFO: Setting log level to 20 00:26:12.932 INFO: Log level set to 20 00:26:12.932 INFO: Log level set to 20 00:26:12.932 
INFO: Requests: 00:26:12.932 { 00:26:12.932 "jsonrpc": "2.0", 00:26:12.932 "method": "framework_start_init", 00:26:12.932 "id": 1 00:26:12.932 } 00:26:12.932 00:26:12.932 INFO: Requests: 00:26:12.932 { 00:26:12.932 "jsonrpc": "2.0", 00:26:12.932 "method": "framework_start_init", 00:26:12.932 "id": 1 00:26:12.932 } 00:26:12.932 00:26:12.932 [2024-07-15 21:48:03.607096] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:12.932 INFO: response: 00:26:12.932 { 00:26:12.932 "jsonrpc": "2.0", 00:26:12.932 "id": 1, 00:26:12.932 "result": true 00:26:12.932 } 00:26:12.932 00:26:12.932 INFO: response: 00:26:12.932 { 00:26:12.932 "jsonrpc": "2.0", 00:26:12.932 "id": 1, 00:26:12.932 "result": true 00:26:12.932 } 00:26:12.932 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.932 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.932 INFO: Setting log level to 40 00:26:12.932 INFO: Setting log level to 40 00:26:12.932 INFO: Setting log level to 40 00:26:12.932 [2024-07-15 21:48:03.616812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.932 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:12.932 21:48:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:26:12.932 21:48:03 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.932 21:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.211 Nvme0n1 00:26:16.211 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.211 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:16.211 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.211 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.211 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.211 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.212 [2024-07-15 21:48:06.480564] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.212 21:48:06 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.212 [ 00:26:16.212 { 00:26:16.212 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:16.212 "subtype": "Discovery", 00:26:16.212 "listen_addresses": [], 00:26:16.212 "allow_any_host": true, 00:26:16.212 "hosts": [] 00:26:16.212 }, 00:26:16.212 { 00:26:16.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.212 "subtype": "NVMe", 00:26:16.212 "listen_addresses": [ 00:26:16.212 { 00:26:16.212 "trtype": "TCP", 00:26:16.212 "adrfam": "IPv4", 00:26:16.212 "traddr": "10.0.0.2", 00:26:16.212 "trsvcid": "4420" 00:26:16.212 } 00:26:16.212 ], 00:26:16.212 "allow_any_host": true, 00:26:16.212 "hosts": [], 00:26:16.212 "serial_number": "SPDK00000000000001", 00:26:16.212 "model_number": "SPDK bdev Controller", 00:26:16.212 "max_namespaces": 1, 00:26:16.212 "min_cntlid": 1, 00:26:16.212 "max_cntlid": 65519, 00:26:16.212 "namespaces": [ 00:26:16.212 { 00:26:16.212 "nsid": 1, 00:26:16.212 "bdev_name": "Nvme0n1", 00:26:16.212 "name": "Nvme0n1", 00:26:16.212 "nguid": "8442E87EDA2A462D961685BF8EBB12DA", 00:26:16.212 "uuid": "8442e87e-da2a-462d-9616-85bf8ebb12da" 00:26:16.212 } 00:26:16.212 ] 00:26:16.212 } 00:26:16.212 ] 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:16.212 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:26:16.212 21:48:06 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:16.212 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:16.212 21:48:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:16.212 rmmod 
nvme_tcp 00:26:16.212 rmmod nvme_fabrics 00:26:16.212 rmmod nvme_keyring 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 443124 ']' 00:26:16.212 21:48:06 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 443124 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 443124 ']' 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 443124 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 443124 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 443124' 00:26:16.212 killing process with pid 443124 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 443124 00:26:16.212 21:48:06 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 443124 00:26:18.111 21:48:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:18.111 21:48:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:18.111 21:48:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:18.111 21:48:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.111 
21:48:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:18.111 21:48:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.111 21:48:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:18.111 21:48:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.017 21:48:10 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:20.017 00:26:20.017 real 0m17.648s 00:26:20.017 user 0m26.674s 00:26:20.017 sys 0m2.057s 00:26:20.017 21:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:20.017 21:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:20.017 ************************************ 00:26:20.017 END TEST nvmf_identify_passthru 00:26:20.017 ************************************ 00:26:20.017 21:48:10 -- common/autotest_common.sh@1142 -- # return 0 00:26:20.017 21:48:10 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:20.017 21:48:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:20.017 21:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.017 21:48:10 -- common/autotest_common.sh@10 -- # set +x 00:26:20.017 ************************************ 00:26:20.017 START TEST nvmf_dif 00:26:20.017 ************************************ 00:26:20.017 21:48:10 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:20.017 * Looking for test storage... 
00:26:20.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:20.017 21:48:10 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.017 21:48:10 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.017 21:48:10 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.017 21:48:10 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.017 21:48:10 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.017 21:48:10 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.017 21:48:10 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.017 21:48:10 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:20.017 21:48:10 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.017 21:48:10 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.017 21:48:10 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:20.017 21:48:10 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:20.017 21:48:10 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:20.018 21:48:10 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:20.018 21:48:10 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.018 21:48:10 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:20.018 21:48:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:20.018 21:48:10 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:20.018 21:48:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.920 21:48:12 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:21.921 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 
(0x8086 - 0x159b)' 00:26:21.921 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:21.921 Found net devices under 0000:08:00.0: cvl_0_0 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:21.921 Found net devices under 0000:08:00.1: cvl_0_1 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.921 21:48:12 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:21.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:26:21.921 00:26:21.921 --- 10.0.0.2 ping statistics --- 00:26:21.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.921 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:26:21.921 00:26:21.921 --- 10.0.0.1 ping statistics --- 00:26:21.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.921 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:21.921 21:48:12 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:22.858 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:26:22.858 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:22.858 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:26:22.858 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:26:22.858 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:26:22.858 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:26:22.858 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:26:22.858 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:26:22.858 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:26:22.858 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:26:22.858 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:26:22.858 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:26:22.858 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:26:22.858 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:26:22.858 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:26:22.858 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:26:22.858 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.858 21:48:13 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:22.858 21:48:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:22.858 21:48:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:22.858 21:48:13 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:22.858 21:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=446077 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:22.858 21:48:13 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 446077 00:26:22.858 21:48:13 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 446077 ']' 00:26:22.858 21:48:13 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.858 21:48:13 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.858 21:48:13 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.858 21:48:13 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.858 21:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:22.858 [2024-07-15 21:48:13.523236] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:26:22.858 [2024-07-15 21:48:13.523346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.858 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.858 [2024-07-15 21:48:13.588790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.115 [2024-07-15 21:48:13.704540] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.115 [2024-07-15 21:48:13.704597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.115 [2024-07-15 21:48:13.704613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.115 [2024-07-15 21:48:13.704628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.115 [2024-07-15 21:48:13.704639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:23.115 [2024-07-15 21:48:13.704668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:23.115 21:48:13 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:23.115 21:48:13 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.115 21:48:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:23.115 21:48:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:23.115 [2024-07-15 21:48:13.830706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.115 21:48:13 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.115 21:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:23.115 ************************************ 00:26:23.115 START TEST fio_dif_1_default 00:26:23.115 ************************************ 00:26:23.115 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:23.116 bdev_null0 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:23.116 [2024-07-15 21:48:13.886964] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:23.116 { 00:26:23.116 "params": { 00:26:23.116 "name": "Nvme$subsystem", 00:26:23.116 "trtype": "$TEST_TRANSPORT", 00:26:23.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:23.116 "adrfam": "ipv4", 00:26:23.116 "trsvcid": "$NVMF_PORT", 00:26:23.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:23.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:23.116 "hdgst": ${hdgst:-false}, 00:26:23.116 "ddgst": ${ddgst:-false} 00:26:23.116 }, 00:26:23.116 "method": "bdev_nvme_attach_controller" 00:26:23.116 } 00:26:23.116 EOF 00:26:23.116 )") 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:23.116 21:48:13 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:23.116 21:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:23.116 "params": { 00:26:23.116 "name": "Nvme0", 00:26:23.116 "trtype": "tcp", 00:26:23.116 "traddr": "10.0.0.2", 00:26:23.116 "adrfam": "ipv4", 00:26:23.116 "trsvcid": "4420", 00:26:23.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:23.116 "hdgst": false, 00:26:23.116 "ddgst": false 00:26:23.116 }, 00:26:23.116 "method": "bdev_nvme_attach_controller" 00:26:23.116 }' 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:23.373 21:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:23.373 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:23.373 fio-3.35 
00:26:23.373 Starting 1 thread 00:26:23.630 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.878 00:26:35.878 filename0: (groupid=0, jobs=1): err= 0: pid=446255: Mon Jul 15 21:48:24 2024 00:26:35.878 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10013msec) 00:26:35.878 slat (nsec): min=6655, max=73346, avg=9223.90, stdev=3500.57 00:26:35.878 clat (usec): min=495, max=47090, avg=40670.54, stdev=3652.67 00:26:35.878 lat (usec): min=503, max=47128, avg=40679.76, stdev=3652.03 00:26:35.878 clat percentiles (usec): 00:26:35.878 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:35.878 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:35.878 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:35.878 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:26:35.878 | 99.99th=[46924] 00:26:35.878 bw ( KiB/s): min= 384, max= 416, per=99.72%, avg=392.00, stdev=14.22, samples=20 00:26:35.878 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:26:35.878 lat (usec) : 500=0.10%, 750=0.71% 00:26:35.878 lat (msec) : 50=99.19% 00:26:35.878 cpu : usr=90.51%, sys=9.23%, ctx=14, majf=0, minf=257 00:26:35.878 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.878 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.878 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:35.878 00:26:35.878 Run status group 0 (all jobs): 00:26:35.878 READ: bw=393KiB/s (403kB/s), 393KiB/s-393KiB/s (403kB/s-403kB/s), io=3936KiB (4030kB), run=10013-10013msec 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:35.878 21:48:25 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.878 00:26:35.878 real 0m11.157s 00:26:35.878 user 0m10.057s 00:26:35.878 sys 0m1.145s 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:35.878 21:48:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 ************************************ 00:26:35.878 END TEST fio_dif_1_default 00:26:35.878 ************************************ 00:26:35.878 21:48:25 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:35.878 21:48:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:35.878 21:48:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:35.878 21:48:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.878 21:48:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:35.878 
************************************ 00:26:35.878 START TEST fio_dif_1_multi_subsystems 00:26:35.878 ************************************ 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.879 bdev_null0 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.879 [2024-07-15 21:48:25.080317] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.879 bdev_null1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:35.879 21:48:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@82 -- # gen_fio_conf 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:35.879 { 00:26:35.879 "params": { 00:26:35.879 "name": "Nvme$subsystem", 00:26:35.879 "trtype": "$TEST_TRANSPORT", 00:26:35.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.879 "adrfam": "ipv4", 00:26:35.879 "trsvcid": "$NVMF_PORT", 00:26:35.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.879 "hdgst": ${hdgst:-false}, 00:26:35.879 "ddgst": ${ddgst:-false} 00:26:35.879 }, 00:26:35.879 "method": "bdev_nvme_attach_controller" 00:26:35.879 } 00:26:35.879 EOF 00:26:35.879 )") 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local 
asan_lib= 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:35.879 { 00:26:35.879 "params": { 00:26:35.879 "name": "Nvme$subsystem", 00:26:35.879 "trtype": "$TEST_TRANSPORT", 00:26:35.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.879 "adrfam": "ipv4", 00:26:35.879 "trsvcid": "$NVMF_PORT", 00:26:35.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.879 "hdgst": ${hdgst:-false}, 00:26:35.879 "ddgst": ${ddgst:-false} 00:26:35.879 }, 00:26:35.879 "method": "bdev_nvme_attach_controller" 00:26:35.879 } 00:26:35.879 EOF 00:26:35.879 )") 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:35.879 
21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:35.879 "params": { 00:26:35.879 "name": "Nvme0", 00:26:35.879 "trtype": "tcp", 00:26:35.879 "traddr": "10.0.0.2", 00:26:35.879 "adrfam": "ipv4", 00:26:35.879 "trsvcid": "4420", 00:26:35.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:35.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:35.879 "hdgst": false, 00:26:35.879 "ddgst": false 00:26:35.879 }, 00:26:35.879 "method": "bdev_nvme_attach_controller" 00:26:35.879 },{ 00:26:35.879 "params": { 00:26:35.879 "name": "Nvme1", 00:26:35.879 "trtype": "tcp", 00:26:35.879 "traddr": "10.0.0.2", 00:26:35.879 "adrfam": "ipv4", 00:26:35.879 "trsvcid": "4420", 00:26:35.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:35.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:35.879 "hdgst": false, 00:26:35.879 "ddgst": false 00:26:35.879 }, 00:26:35.879 "method": "bdev_nvme_attach_controller" 00:26:35.879 }' 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:35.879 21:48:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.879 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:35.879 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:35.879 fio-3.35 00:26:35.879 Starting 2 threads 00:26:35.879 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.843 00:26:45.843 filename0: (groupid=0, jobs=1): err= 0: pid=447332: Mon Jul 15 21:48:36 2024 00:26:45.843 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10012msec) 00:26:45.843 slat (nsec): min=7036, max=62846, avg=9518.47, stdev=2230.04 00:26:45.843 clat (usec): min=517, max=47618, avg=21048.91, stdev=20355.45 00:26:45.843 lat (usec): min=526, max=47634, avg=21058.43, stdev=20355.26 00:26:45.843 clat percentiles (usec): 00:26:45.843 | 1.00th=[ 529], 5.00th=[ 553], 10.00th=[ 578], 20.00th=[ 594], 00:26:45.843 | 30.00th=[ 644], 40.00th=[ 734], 50.00th=[40633], 60.00th=[41157], 00:26:45.843 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:45.843 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:26:45.843 | 99.99th=[47449] 00:26:45.843 bw ( KiB/s): min= 640, max= 832, per=66.00%, avg=758.40, stdev=42.93, samples=20 00:26:45.843 iops : min= 160, max= 208, avg=189.60, stdev=10.73, samples=20 00:26:45.843 lat (usec) : 750=44.68%, 1000=5.00% 00:26:45.843 lat (msec) : 10=0.21%, 50=50.11% 00:26:45.843 cpu : usr=94.18%, sys=5.56%, ctx=15, majf=0, minf=188 00:26:45.843 IO depths : 1=25.0%, 
2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.843 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.843 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:45.843 filename1: (groupid=0, jobs=1): err= 0: pid=447333: Mon Jul 15 21:48:36 2024 00:26:45.843 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10016msec) 00:26:45.843 slat (nsec): min=6620, max=60356, avg=10133.91, stdev=4267.15 00:26:45.843 clat (usec): min=40658, max=49652, avg=41017.59, stdev=564.48 00:26:45.843 lat (usec): min=40667, max=49668, avg=41027.72, stdev=564.85 00:26:45.843 clat percentiles (usec): 00:26:45.843 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:45.843 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:45.843 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:45.843 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49546], 99.95th=[49546], 00:26:45.843 | 99.99th=[49546] 00:26:45.843 bw ( KiB/s): min= 352, max= 416, per=33.78%, avg=388.80, stdev=15.66, samples=20 00:26:45.843 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:26:45.843 lat (msec) : 50=100.00% 00:26:45.843 cpu : usr=94.51%, sys=5.22%, ctx=23, majf=0, minf=118 00:26:45.843 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.844 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.844 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:45.844 00:26:45.844 Run status group 0 (all jobs): 00:26:45.844 READ: bw=1149KiB/s (1176kB/s), 390KiB/s-759KiB/s (399kB/s-777kB/s), io=11.2MiB (11.8MB), 
run=10012-10016msec 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.844 21:48:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.844 00:26:45.844 real 0m11.359s 00:26:45.844 user 0m20.033s 00:26:45.844 sys 0m1.334s 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 ************************************ 00:26:45.844 END TEST fio_dif_1_multi_subsystems 00:26:45.844 ************************************ 00:26:45.844 21:48:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:45.844 21:48:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:45.844 21:48:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:45.844 21:48:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 ************************************ 00:26:45.844 START TEST fio_dif_rand_params 00:26:45.844 ************************************ 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:45.844 
21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 bdev_null0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:45.844 21:48:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:45.844 [2024-07-15 21:48:36.488152] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:45.844 21:48:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.844 { 00:26:45.844 "params": { 00:26:45.844 "name": "Nvme$subsystem", 00:26:45.844 "trtype": "$TEST_TRANSPORT", 00:26:45.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.844 "adrfam": "ipv4", 00:26:45.844 "trsvcid": "$NVMF_PORT", 00:26:45.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.844 "hdgst": ${hdgst:-false}, 00:26:45.844 "ddgst": ${ddgst:-false} 00:26:45.844 }, 00:26:45.844 "method": "bdev_nvme_attach_controller" 00:26:45.844 } 00:26:45.844 EOF 00:26:45.844 )") 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.844 21:48:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:45.844 "params": { 00:26:45.844 "name": "Nvme0", 00:26:45.844 "trtype": "tcp", 00:26:45.844 "traddr": "10.0.0.2", 00:26:45.844 "adrfam": "ipv4", 00:26:45.844 "trsvcid": "4420", 00:26:45.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:45.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:45.844 "hdgst": false, 00:26:45.844 "ddgst": false 00:26:45.844 }, 00:26:45.844 "method": "bdev_nvme_attach_controller" 00:26:45.844 }' 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:45.844 21:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:46.102 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:46.102 ... 00:26:46.102 fio-3.35 00:26:46.102 Starting 3 threads 00:26:46.102 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.667 00:26:52.667 filename0: (groupid=0, jobs=1): err= 0: pid=448491: Mon Jul 15 21:48:42 2024 00:26:52.667 read: IOPS=165, BW=20.7MiB/s (21.7MB/s)(104MiB/5024msec) 00:26:52.667 slat (nsec): min=8073, max=46769, avg=15666.52, stdev=6137.44 00:26:52.667 clat (usec): min=3909, max=92189, avg=18113.23, stdev=17251.43 00:26:52.667 lat (usec): min=3921, max=92212, avg=18128.90, stdev=17252.38 00:26:52.667 clat percentiles (usec): 00:26:52.667 | 1.00th=[ 4555], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 7963], 00:26:52.667 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10683], 60.00th=[11994], 00:26:52.667 | 70.00th=[12780], 80.00th=[46400], 90.00th=[51119], 95.00th=[52691], 00:26:52.667 | 99.00th=[53740], 99.50th=[54264], 99.90th=[91751], 99.95th=[91751], 00:26:52.667 | 99.99th=[91751] 00:26:52.667 bw ( KiB/s): min=13824, max=35840, per=26.73%, avg=21196.80, stdev=6235.53, samples=10 00:26:52.667 iops : min= 108, max= 280, avg=165.60, stdev=48.72, samples=10 00:26:52.667 lat (msec) : 4=0.12%, 10=44.28%, 20=34.90%, 50=7.58%, 100=13.12% 00:26:52.667 cpu : usr=94.55%, sys=4.78%, ctx=205, majf=0, minf=117 00:26:52.667 IO depths : 1=4.9%, 2=95.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.667 issued rwts: total=831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.667 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:26:52.667 filename0: (groupid=0, jobs=1): err= 0: pid=448492: Mon Jul 15 21:48:42 2024 00:26:52.667 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(160MiB/5028msec) 00:26:52.667 slat (nsec): min=7680, max=36684, avg=12943.36, stdev=3005.51 00:26:52.667 clat (usec): min=4596, max=53903, avg=11794.44, stdev=10142.85 00:26:52.667 lat (usec): min=4609, max=53920, avg=11807.39, stdev=10142.88 00:26:52.667 clat percentiles (usec): 00:26:52.667 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 7504], 00:26:52.667 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503], 00:26:52.667 | 70.00th=[10945], 80.00th=[12256], 90.00th=[13435], 95.00th=[47449], 00:26:52.667 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[53740], 00:26:52.667 | 99.99th=[53740] 00:26:52.667 bw ( KiB/s): min=19968, max=40960, per=41.14%, avg=32621.10, stdev=5594.95, samples=10 00:26:52.667 iops : min= 156, max= 320, avg=254.80, stdev=43.70, samples=10 00:26:52.667 lat (msec) : 10=63.19%, 20=30.23%, 50=4.62%, 100=1.96% 00:26:52.667 cpu : usr=93.75%, sys=5.81%, ctx=14, majf=0, minf=128 00:26:52.667 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.667 issued rwts: total=1277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.667 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:52.667 filename0: (groupid=0, jobs=1): err= 0: pid=448493: Mon Jul 15 21:48:42 2024 00:26:52.667 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(127MiB/5044msec) 00:26:52.667 slat (nsec): min=7819, max=28142, avg=12354.90, stdev=2457.21 00:26:52.667 clat (usec): min=4040, max=58854, avg=14822.23, stdev=13065.61 00:26:52.667 lat (usec): min=4050, max=58867, avg=14834.59, stdev=13065.68 00:26:52.667 clat percentiles (usec): 00:26:52.667 | 1.00th=[ 4621], 5.00th=[ 4817], 
10.00th=[ 5211], 20.00th=[ 7767], 00:26:52.667 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11731], 00:26:52.667 | 70.00th=[13435], 80.00th=[15270], 90.00th=[45876], 95.00th=[51119], 00:26:52.667 | 99.00th=[54789], 99.50th=[54789], 99.90th=[58983], 99.95th=[58983], 00:26:52.667 | 99.99th=[58983] 00:26:52.667 bw ( KiB/s): min=14848, max=39680, per=32.73%, avg=25958.40, stdev=6835.41, samples=10 00:26:52.667 iops : min= 116, max= 310, avg=202.80, stdev=53.40, samples=10 00:26:52.667 lat (msec) : 10=40.61%, 20=47.89%, 50=4.92%, 100=6.59% 00:26:52.667 cpu : usr=93.81%, sys=5.81%, ctx=7, majf=0, minf=56 00:26:52.667 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.667 issued rwts: total=1017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.667 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:52.667 00:26:52.667 Run status group 0 (all jobs): 00:26:52.667 READ: bw=77.4MiB/s (81.2MB/s), 20.7MiB/s-31.7MiB/s (21.7MB/s-33.3MB/s), io=391MiB (410MB), run=5024-5044msec 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 bdev_null0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 [2024-07-15 21:48:42.654685] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 bdev_null1 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 bdev_null2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:26:52.667 { 00:26:52.667 "params": { 00:26:52.667 "name": "Nvme$subsystem", 00:26:52.667 "trtype": "$TEST_TRANSPORT", 00:26:52.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.667 "adrfam": "ipv4", 00:26:52.667 "trsvcid": "$NVMF_PORT", 00:26:52.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.667 "hdgst": ${hdgst:-false}, 00:26:52.667 "ddgst": ${ddgst:-false} 00:26:52.667 }, 00:26:52.667 "method": "bdev_nvme_attach_controller" 00:26:52.667 } 00:26:52.667 EOF 00:26:52.667 )") 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.667 { 00:26:52.667 "params": { 00:26:52.667 "name": "Nvme$subsystem", 00:26:52.667 "trtype": "$TEST_TRANSPORT", 00:26:52.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.667 "adrfam": "ipv4", 00:26:52.667 "trsvcid": "$NVMF_PORT", 00:26:52.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.667 "hdgst": ${hdgst:-false}, 00:26:52.667 "ddgst": ${ddgst:-false} 00:26:52.667 }, 00:26:52.667 "method": 
"bdev_nvme_attach_controller" 00:26:52.667 } 00:26:52.667 EOF 00:26:52.667 )") 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:52.667 { 00:26:52.667 "params": { 00:26:52.667 "name": "Nvme$subsystem", 00:26:52.667 "trtype": "$TEST_TRANSPORT", 00:26:52.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:52.667 "adrfam": "ipv4", 00:26:52.667 "trsvcid": "$NVMF_PORT", 00:26:52.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:52.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:52.667 "hdgst": ${hdgst:-false}, 00:26:52.667 "ddgst": ${ddgst:-false} 00:26:52.667 }, 00:26:52.667 "method": "bdev_nvme_attach_controller" 00:26:52.667 } 00:26:52.667 EOF 00:26:52.667 )") 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:52.667 21:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:52.667 "params": { 00:26:52.667 "name": "Nvme0", 00:26:52.667 "trtype": "tcp", 00:26:52.667 "traddr": "10.0.0.2", 00:26:52.667 "adrfam": "ipv4", 00:26:52.667 "trsvcid": "4420", 00:26:52.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:52.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:52.667 "hdgst": false, 00:26:52.668 "ddgst": false 00:26:52.668 }, 00:26:52.668 "method": "bdev_nvme_attach_controller" 00:26:52.668 },{ 00:26:52.668 "params": { 00:26:52.668 "name": "Nvme1", 00:26:52.668 "trtype": "tcp", 00:26:52.668 "traddr": "10.0.0.2", 00:26:52.668 "adrfam": "ipv4", 00:26:52.668 "trsvcid": "4420", 00:26:52.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:52.668 "hdgst": false, 00:26:52.668 "ddgst": false 00:26:52.668 }, 00:26:52.668 "method": "bdev_nvme_attach_controller" 00:26:52.668 },{ 00:26:52.668 "params": { 00:26:52.668 "name": "Nvme2", 00:26:52.668 "trtype": "tcp", 00:26:52.668 "traddr": "10.0.0.2", 00:26:52.668 "adrfam": "ipv4", 00:26:52.668 "trsvcid": "4420", 00:26:52.668 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:52.668 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:52.668 "hdgst": false, 00:26:52.668 "ddgst": false 00:26:52.668 }, 00:26:52.668 "method": "bdev_nvme_attach_controller" 00:26:52.668 }' 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:52.668 21:48:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:52.668 21:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:52.668 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:52.668 ... 00:26:52.668 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:52.668 ... 00:26:52.668 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:52.668 ... 
00:26:52.668 fio-3.35 00:26:52.668 Starting 24 threads 00:26:52.668 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.866 00:27:04.866 filename0: (groupid=0, jobs=1): err= 0: pid=449154: Mon Jul 15 21:48:54 2024 00:27:04.866 read: IOPS=59, BW=239KiB/s (245kB/s)(2432KiB/10167msec) 00:27:04.866 slat (usec): min=5, max=141, avg=52.10, stdev=37.46 00:27:04.866 clat (msec): min=82, max=414, avg=267.09, stdev=62.81 00:27:04.866 lat (msec): min=82, max=414, avg=267.14, stdev=62.84 00:27:04.866 clat percentiles (msec): 00:27:04.866 | 1.00th=[ 84], 5.00th=[ 171], 10.00th=[ 197], 20.00th=[ 213], 00:27:04.866 | 30.00th=[ 222], 40.00th=[ 224], 50.00th=[ 300], 60.00th=[ 313], 00:27:04.866 | 70.00th=[ 321], 80.00th=[ 321], 90.00th=[ 330], 95.00th=[ 338], 00:27:04.866 | 99.00th=[ 368], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 414], 00:27:04.866 | 99.99th=[ 414] 00:27:04.866 bw ( KiB/s): min= 128, max= 368, per=4.55%, avg=236.75, stdev=61.10, samples=20 00:27:04.866 iops : min= 32, max= 92, avg=59.15, stdev=15.26, samples=20 00:27:04.866 lat (msec) : 100=2.63%, 250=43.09%, 500=54.28% 00:27:04.866 cpu : usr=98.39%, sys=1.22%, ctx=16, majf=0, minf=69 00:27:04.866 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:27:04.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.866 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.866 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.866 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.866 filename0: (groupid=0, jobs=1): err= 0: pid=449155: Mon Jul 15 21:48:54 2024 00:27:04.866 read: IOPS=59, BW=237KiB/s (243kB/s)(2408KiB/10161msec) 00:27:04.866 slat (usec): min=8, max=154, avg=76.45, stdev=28.34 00:27:04.866 clat (msec): min=56, max=457, avg=269.20, stdev=73.27 00:27:04.866 lat (msec): min=56, max=457, avg=269.27, stdev=73.28 00:27:04.866 clat percentiles (msec): 00:27:04.866 | 1.00th=[ 58], 5.00th=[ 
128], 10.00th=[ 171], 20.00th=[ 205], 00:27:04.866 | 30.00th=[ 234], 40.00th=[ 262], 50.00th=[ 300], 60.00th=[ 313], 00:27:04.867 | 70.00th=[ 317], 80.00th=[ 321], 90.00th=[ 326], 95.00th=[ 334], 00:27:04.867 | 99.00th=[ 439], 99.50th=[ 447], 99.90th=[ 460], 99.95th=[ 460], 00:27:04.867 | 99.99th=[ 460] 00:27:04.867 bw ( KiB/s): min= 127, max= 432, per=4.51%, avg=234.30, stdev=72.23, samples=20 00:27:04.867 iops : min= 31, max= 108, avg=58.50, stdev=18.11, samples=20 00:27:04.867 lat (msec) : 100=3.82%, 250=35.71%, 500=60.47% 00:27:04.867 cpu : usr=98.37%, sys=1.18%, ctx=32, majf=0, minf=114 00:27:04.867 IO depths : 1=2.2%, 2=6.3%, 4=18.3%, 8=63.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:27:04.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 issued rwts: total=602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.867 filename0: (groupid=0, jobs=1): err= 0: pid=449156: Mon Jul 15 21:48:54 2024 00:27:04.867 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10149msec) 00:27:04.867 slat (usec): min=8, max=138, avg=60.78, stdev=36.47 00:27:04.867 clat (msec): min=170, max=450, avg=307.02, stdev=46.42 00:27:04.867 lat (msec): min=170, max=450, avg=307.08, stdev=46.43 00:27:04.867 clat percentiles (msec): 00:27:04.867 | 1.00th=[ 171], 5.00th=[ 192], 10.00th=[ 236], 20.00th=[ 288], 00:27:04.867 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 321], 60.00th=[ 321], 00:27:04.867 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 363], 00:27:04.867 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 451], 99.95th=[ 451], 00:27:04.867 | 99.99th=[ 451] 00:27:04.867 bw ( KiB/s): min= 128, max= 256, per=3.93%, avg=204.80, stdev=61.33, samples=20 00:27:04.867 iops : min= 32, max= 64, avg=51.20, stdev=15.33, samples=20 00:27:04.867 lat (msec) : 250=15.15%, 500=84.85% 00:27:04.867 cpu : usr=98.40%, sys=1.07%, 
ctx=41, majf=0, minf=62 00:27:04.867 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:27:04.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.867 filename0: (groupid=0, jobs=1): err= 0: pid=449157: Mon Jul 15 21:48:54 2024 00:27:04.867 read: IOPS=49, BW=198KiB/s (203kB/s)(1984KiB/10022msec) 00:27:04.867 slat (usec): min=7, max=112, avg=36.04, stdev=16.46 00:27:04.867 clat (msec): min=210, max=599, avg=322.98, stdev=58.22 00:27:04.867 lat (msec): min=210, max=599, avg=323.02, stdev=58.21 00:27:04.867 clat percentiles (msec): 00:27:04.867 | 1.00th=[ 224], 5.00th=[ 234], 10.00th=[ 300], 20.00th=[ 305], 00:27:04.867 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.867 | 70.00th=[ 326], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 418], 00:27:04.867 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:27:04.867 | 99.99th=[ 600] 00:27:04.867 bw ( KiB/s): min= 128, max= 256, per=3.89%, avg=202.00, stdev=60.16, samples=19 00:27:04.867 iops : min= 32, max= 64, avg=50.42, stdev=15.05, samples=19 00:27:04.867 lat (msec) : 250=6.05%, 500=90.73%, 750=3.23% 00:27:04.867 cpu : usr=98.08%, sys=1.30%, ctx=71, majf=0, minf=41 00:27:04.867 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:27:04.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.867 filename0: (groupid=0, jobs=1): err= 0: pid=449158: Mon Jul 15 21:48:54 2024 00:27:04.867 read: IOPS=50, BW=202KiB/s 
(207kB/s)(2048KiB/10149msec) 00:27:04.867 slat (usec): min=12, max=146, avg=87.32, stdev=25.43 00:27:04.867 clat (msec): min=167, max=476, avg=316.16, stdev=41.23 00:27:04.867 lat (msec): min=167, max=476, avg=316.25, stdev=41.23 00:27:04.867 clat percentiles (msec): 00:27:04.867 | 1.00th=[ 171], 5.00th=[ 279], 10.00th=[ 284], 20.00th=[ 305], 00:27:04.867 | 30.00th=[ 309], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.867 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 368], 00:27:04.867 | 99.00th=[ 460], 99.50th=[ 464], 99.90th=[ 477], 99.95th=[ 477], 00:27:04.867 | 99.99th=[ 477] 00:27:04.867 bw ( KiB/s): min= 128, max= 256, per=3.82%, avg=198.40, stdev=60.85, samples=20 00:27:04.867 iops : min= 32, max= 64, avg=49.60, stdev=15.21, samples=20 00:27:04.867 lat (msec) : 250=4.69%, 500=95.31% 00:27:04.867 cpu : usr=97.54%, sys=1.60%, ctx=148, majf=0, minf=64 00:27:04.867 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:27:04.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.867 filename0: (groupid=0, jobs=1): err= 0: pid=449159: Mon Jul 15 21:48:54 2024 00:27:04.867 read: IOPS=49, BW=198KiB/s (203kB/s)(1984KiB/10022msec) 00:27:04.867 slat (usec): min=7, max=137, avg=75.85, stdev=31.80 00:27:04.867 clat (msec): min=164, max=598, avg=322.64, stdev=63.45 00:27:04.867 lat (msec): min=164, max=598, avg=322.71, stdev=63.44 00:27:04.867 clat percentiles (msec): 00:27:04.867 | 1.00th=[ 182], 5.00th=[ 224], 10.00th=[ 300], 20.00th=[ 305], 00:27:04.867 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.867 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 443], 00:27:04.867 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 
600], 00:27:04.867 | 99.99th=[ 600] 00:27:04.867 bw ( KiB/s): min= 128, max= 256, per=3.89%, avg=202.00, stdev=64.84, samples=19 00:27:04.867 iops : min= 32, max= 64, avg=50.42, stdev=16.14, samples=19 00:27:04.867 lat (msec) : 250=6.05%, 500=90.73%, 750=3.23% 00:27:04.867 cpu : usr=98.23%, sys=1.26%, ctx=39, majf=0, minf=56 00:27:04.867 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:27:04.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.867 filename0: (groupid=0, jobs=1): err= 0: pid=449160: Mon Jul 15 21:48:54 2024 00:27:04.867 read: IOPS=57, BW=231KiB/s (236kB/s)(2344KiB/10164msec) 00:27:04.867 slat (usec): min=4, max=146, avg=65.58, stdev=36.80 00:27:04.867 clat (msec): min=79, max=414, avg=276.64, stdev=66.42 00:27:04.867 lat (msec): min=79, max=414, avg=276.70, stdev=66.44 00:27:04.867 clat percentiles (msec): 00:27:04.867 | 1.00th=[ 80], 5.00th=[ 126], 10.00th=[ 186], 20.00th=[ 218], 00:27:04.867 | 30.00th=[ 245], 40.00th=[ 300], 50.00th=[ 309], 60.00th=[ 313], 00:27:04.867 | 70.00th=[ 317], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 338], 00:27:04.867 | 99.00th=[ 388], 99.50th=[ 409], 99.90th=[ 414], 99.95th=[ 414], 00:27:04.867 | 99.99th=[ 414] 00:27:04.867 bw ( KiB/s): min= 128, max= 384, per=4.38%, avg=227.95, stdev=65.50, samples=20 00:27:04.867 iops : min= 32, max= 96, avg=56.95, stdev=16.43, samples=20 00:27:04.867 lat (msec) : 100=2.73%, 250=30.72%, 500=66.55% 00:27:04.867 cpu : usr=98.14%, sys=1.22%, ctx=73, majf=0, minf=66 00:27:04.867 IO depths : 1=3.2%, 2=8.2%, 4=21.0%, 8=58.4%, 16=9.2%, 32=0.0%, >=64=0.0% 00:27:04.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:04.867 issued rwts: total=586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.867 filename0: (groupid=0, jobs=1): err= 0: pid=449161: Mon Jul 15 21:48:54 2024 00:27:04.867 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10150msec) 00:27:04.867 slat (usec): min=8, max=145, avg=81.38, stdev=30.94 00:27:04.867 clat (msec): min=169, max=482, avg=306.85, stdev=45.42 00:27:04.867 lat (msec): min=169, max=482, avg=306.93, stdev=45.43 00:27:04.867 clat percentiles (msec): 00:27:04.867 | 1.00th=[ 169], 5.00th=[ 209], 10.00th=[ 226], 20.00th=[ 288], 00:27:04.867 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.867 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 338], 95.00th=[ 342], 00:27:04.867 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 485], 99.95th=[ 485], 00:27:04.867 | 99.99th=[ 485] 00:27:04.867 bw ( KiB/s): min= 128, max= 368, per=3.93%, avg=204.80, stdev=73.89, samples=20 00:27:04.867 iops : min= 32, max= 92, avg=51.20, stdev=18.47, samples=20 00:27:04.867 lat (msec) : 250=13.64%, 500=86.36% 00:27:04.867 cpu : usr=98.37%, sys=1.11%, ctx=148, majf=0, minf=75 00:27:04.867 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:27:04.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.867 filename1: (groupid=0, jobs=1): err= 0: pid=449162: Mon Jul 15 21:48:54 2024 00:27:04.867 read: IOPS=74, BW=296KiB/s (303kB/s)(3008KiB/10155msec) 00:27:04.867 slat (usec): min=4, max=145, avg=17.28, stdev=22.61 00:27:04.867 clat (msec): min=82, max=338, avg=214.19, stdev=37.18 00:27:04.867 lat (msec): min=82, max=338, avg=214.21, stdev=37.19 00:27:04.867 clat percentiles (msec): 00:27:04.867 | 
1.00th=[ 83], 5.00th=[ 163], 10.00th=[ 182], 20.00th=[ 192], 00:27:04.867 | 30.00th=[ 201], 40.00th=[ 211], 50.00th=[ 215], 60.00th=[ 220], 00:27:04.867 | 70.00th=[ 224], 80.00th=[ 234], 90.00th=[ 247], 95.00th=[ 268], 00:27:04.867 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:27:04.867 | 99.99th=[ 338] 00:27:04.867 bw ( KiB/s): min= 256, max= 384, per=5.67%, avg=294.35, stdev=55.21, samples=20 00:27:04.867 iops : min= 64, max= 96, avg=73.55, stdev=13.75, samples=20 00:27:04.867 lat (msec) : 100=2.13%, 250=91.22%, 500=6.65% 00:27:04.867 cpu : usr=98.21%, sys=1.28%, ctx=88, majf=0, minf=55 00:27:04.867 IO depths : 1=2.0%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.5%, 32=0.0%, >=64=0.0% 00:27:04.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.867 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.867 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.867 filename1: (groupid=0, jobs=1): err= 0: pid=449163: Mon Jul 15 21:48:54 2024 00:27:04.867 read: IOPS=50, BW=204KiB/s (209kB/s)(2048KiB/10047msec) 00:27:04.867 slat (usec): min=12, max=145, avg=77.09, stdev=29.47 00:27:04.867 clat (msec): min=167, max=465, avg=313.34, stdev=44.50 00:27:04.867 lat (msec): min=167, max=465, avg=313.41, stdev=44.51 00:27:04.867 clat percentiles (msec): 00:27:04.867 | 1.00th=[ 180], 5.00th=[ 209], 10.00th=[ 284], 20.00th=[ 300], 00:27:04.867 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.867 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 426], 00:27:04.867 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 464], 99.95th=[ 464], 00:27:04.867 | 99.99th=[ 464] 00:27:04.867 bw ( KiB/s): min= 128, max= 256, per=3.82%, avg=198.40, stdev=59.28, samples=20 00:27:04.867 iops : min= 32, max= 64, avg=49.60, stdev=14.82, samples=20 00:27:04.868 lat (msec) : 250=5.86%, 500=94.14% 00:27:04.868 cpu : 
usr=98.33%, sys=1.10%, ctx=39, majf=0, minf=74 00:27:04.868 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.868 filename1: (groupid=0, jobs=1): err= 0: pid=449164: Mon Jul 15 21:48:54 2024 00:27:04.868 read: IOPS=49, BW=198KiB/s (203kB/s)(1984KiB/10018msec) 00:27:04.868 slat (usec): min=15, max=148, avg=38.72, stdev=18.85 00:27:04.868 clat (msec): min=222, max=595, avg=322.84, stdev=55.10 00:27:04.868 lat (msec): min=222, max=595, avg=322.88, stdev=55.11 00:27:04.868 clat percentiles (msec): 00:27:04.868 | 1.00th=[ 234], 5.00th=[ 284], 10.00th=[ 300], 20.00th=[ 305], 00:27:04.868 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.868 | 70.00th=[ 326], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 338], 00:27:04.868 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:27:04.868 | 99.99th=[ 592] 00:27:04.868 bw ( KiB/s): min= 127, max= 256, per=3.87%, avg=201.89, stdev=63.41, samples=19 00:27:04.868 iops : min= 31, max= 64, avg=50.37, stdev=15.93, samples=19 00:27:04.868 lat (msec) : 250=4.44%, 500=92.34%, 750=3.23% 00:27:04.868 cpu : usr=97.72%, sys=1.52%, ctx=97, majf=0, minf=42 00:27:04.868 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.868 filename1: (groupid=0, jobs=1): err= 0: pid=449165: Mon Jul 15 21:48:54 2024 00:27:04.868 read: 
IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10149msec) 00:27:04.868 slat (nsec): min=12752, max=83915, avg=27068.88, stdev=11572.69 00:27:04.868 clat (msec): min=170, max=421, avg=307.29, stdev=40.37 00:27:04.868 lat (msec): min=170, max=422, avg=307.31, stdev=40.37 00:27:04.868 clat percentiles (msec): 00:27:04.868 | 1.00th=[ 171], 5.00th=[ 211], 10.00th=[ 236], 20.00th=[ 300], 00:27:04.868 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.868 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 334], 95.00th=[ 342], 00:27:04.868 | 99.00th=[ 397], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 422], 00:27:04.868 | 99.99th=[ 422] 00:27:04.868 bw ( KiB/s): min= 128, max= 256, per=3.93%, avg=204.80, stdev=62.85, samples=20 00:27:04.868 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:27:04.868 lat (msec) : 250=10.98%, 500=89.02% 00:27:04.868 cpu : usr=97.70%, sys=1.50%, ctx=61, majf=0, minf=81 00:27:04.868 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.868 filename1: (groupid=0, jobs=1): err= 0: pid=449166: Mon Jul 15 21:48:54 2024 00:27:04.868 read: IOPS=68, BW=272KiB/s (279kB/s)(2760KiB/10139msec) 00:27:04.868 slat (usec): min=8, max=101, avg=14.61, stdev=12.03 00:27:04.868 clat (msec): min=180, max=440, avg=233.94, stdev=44.09 00:27:04.868 lat (msec): min=180, max=440, avg=233.95, stdev=44.09 00:27:04.868 clat percentiles (msec): 00:27:04.868 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 199], 20.00th=[ 203], 00:27:04.868 | 30.00th=[ 209], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 222], 00:27:04.868 | 70.00th=[ 226], 80.00th=[ 275], 90.00th=[ 317], 95.00th=[ 330], 00:27:04.868 | 99.00th=[ 342], 
99.50th=[ 380], 99.90th=[ 443], 99.95th=[ 443], 00:27:04.868 | 99.99th=[ 443] 00:27:04.868 bw ( KiB/s): min= 128, max= 368, per=5.19%, avg=269.60, stdev=57.40, samples=20 00:27:04.868 iops : min= 32, max= 92, avg=67.40, stdev=14.35, samples=20 00:27:04.868 lat (msec) : 250=78.84%, 500=21.16% 00:27:04.868 cpu : usr=98.33%, sys=1.23%, ctx=25, majf=0, minf=88 00:27:04.868 IO depths : 1=1.4%, 2=3.9%, 4=13.5%, 8=70.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 4=90.8%, 8=3.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.868 filename1: (groupid=0, jobs=1): err= 0: pid=449167: Mon Jul 15 21:48:54 2024 00:27:04.868 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10121msec) 00:27:04.868 slat (usec): min=16, max=137, avg=85.66, stdev=27.77 00:27:04.868 clat (msec): min=145, max=596, avg=325.50, stdev=63.96 00:27:04.868 lat (msec): min=145, max=597, avg=325.58, stdev=63.96 00:27:04.868 clat percentiles (msec): 00:27:04.868 | 1.00th=[ 167], 5.00th=[ 226], 10.00th=[ 288], 20.00th=[ 313], 00:27:04.868 | 30.00th=[ 313], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 326], 00:27:04.868 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 409], 00:27:04.868 | 99.00th=[ 600], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:27:04.868 | 99.99th=[ 600] 00:27:04.868 bw ( KiB/s): min= 127, max= 256, per=3.87%, avg=201.89, stdev=63.32, samples=19 00:27:04.868 iops : min= 31, max= 64, avg=50.37, stdev=15.83, samples=19 00:27:04.868 lat (msec) : 250=5.65%, 500=91.13%, 750=3.23% 00:27:04.868 cpu : usr=98.38%, sys=1.18%, ctx=25, majf=0, minf=43 00:27:04.868 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 
4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.868 filename1: (groupid=0, jobs=1): err= 0: pid=449168: Mon Jul 15 21:48:54 2024 00:27:04.868 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10115msec) 00:27:04.868 slat (usec): min=17, max=142, avg=70.03, stdev=32.70 00:27:04.868 clat (msec): min=164, max=599, avg=325.67, stdev=54.43 00:27:04.868 lat (msec): min=164, max=599, avg=325.74, stdev=54.43 00:27:04.868 clat percentiles (msec): 00:27:04.868 | 1.00th=[ 180], 5.00th=[ 279], 10.00th=[ 300], 20.00th=[ 309], 00:27:04.868 | 30.00th=[ 313], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 321], 00:27:04.868 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 447], 00:27:04.868 | 99.00th=[ 535], 99.50th=[ 535], 99.90th=[ 600], 99.95th=[ 600], 00:27:04.868 | 99.99th=[ 600] 00:27:04.868 bw ( KiB/s): min= 128, max= 256, per=3.89%, avg=202.00, stdev=64.84, samples=19 00:27:04.868 iops : min= 32, max= 64, avg=50.42, stdev=16.14, samples=19 00:27:04.868 lat (msec) : 250=3.63%, 500=93.15%, 750=3.23% 00:27:04.868 cpu : usr=98.33%, sys=1.27%, ctx=70, majf=0, minf=59 00:27:04.868 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.868 filename1: (groupid=0, jobs=1): err= 0: pid=449169: Mon Jul 15 21:48:54 2024 00:27:04.868 read: IOPS=50, BW=204KiB/s (209kB/s)(2048KiB/10048msec) 00:27:04.868 slat (usec): min=9, max=144, avg=71.85, stdev=35.35 00:27:04.868 clat (msec): min=189, max=439, avg=313.41, stdev=33.36 00:27:04.868 lat (msec): min=189, max=439, avg=313.48, stdev=33.37 
00:27:04.868 clat percentiles (msec): 00:27:04.868 | 1.00th=[ 215], 5.00th=[ 234], 10.00th=[ 284], 20.00th=[ 300], 00:27:04.868 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.868 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 334], 95.00th=[ 342], 00:27:04.868 | 99.00th=[ 426], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 439], 00:27:04.868 | 99.99th=[ 439] 00:27:04.868 bw ( KiB/s): min= 128, max= 256, per=3.82%, avg=198.40, stdev=62.38, samples=20 00:27:04.868 iops : min= 32, max= 64, avg=49.60, stdev=15.59, samples=20 00:27:04.868 lat (msec) : 250=6.25%, 500=93.75% 00:27:04.868 cpu : usr=98.15%, sys=1.26%, ctx=101, majf=0, minf=69 00:27:04.868 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.868 filename2: (groupid=0, jobs=1): err= 0: pid=449170: Mon Jul 15 21:48:54 2024 00:27:04.868 read: IOPS=81, BW=327KiB/s (335kB/s)(3320KiB/10157msec) 00:27:04.868 slat (usec): min=4, max=133, avg=12.01, stdev= 6.39 00:27:04.868 clat (msec): min=2, max=326, avg=194.22, stdev=68.73 00:27:04.868 lat (msec): min=2, max=326, avg=194.23, stdev=68.73 00:27:04.868 clat percentiles (msec): 00:27:04.868 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 72], 20.00th=[ 188], 00:27:04.868 | 30.00th=[ 203], 40.00th=[ 207], 50.00th=[ 211], 60.00th=[ 215], 00:27:04.868 | 70.00th=[ 222], 80.00th=[ 228], 90.00th=[ 249], 95.00th=[ 279], 00:27:04.868 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:27:04.868 | 99.99th=[ 326] 00:27:04.868 bw ( KiB/s): min= 224, max= 896, per=6.26%, avg=325.55, stdev=139.25, samples=20 00:27:04.868 iops : min= 56, max= 224, avg=81.35, stdev=34.83, samples=20 00:27:04.868 lat (msec) : 
4=3.86%, 10=3.86%, 100=3.61%, 250=79.28%, 500=9.40% 00:27:04.868 cpu : usr=98.47%, sys=1.15%, ctx=24, majf=0, minf=149 00:27:04.868 IO depths : 1=0.5%, 2=2.7%, 4=12.4%, 8=72.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 complete : 0=0.0%, 4=90.6%, 8=4.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.868 issued rwts: total=830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.868 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.868 filename2: (groupid=0, jobs=1): err= 0: pid=449171: Mon Jul 15 21:48:54 2024 00:27:04.868 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10135msec) 00:27:04.868 slat (usec): min=5, max=142, avg=82.51, stdev=34.05 00:27:04.868 clat (msec): min=140, max=545, avg=316.01, stdev=68.58 00:27:04.868 lat (msec): min=140, max=545, avg=316.09, stdev=68.57 00:27:04.868 clat percentiles (msec): 00:27:04.868 | 1.00th=[ 142], 5.00th=[ 169], 10.00th=[ 228], 20.00th=[ 300], 00:27:04.868 | 30.00th=[ 313], 40.00th=[ 317], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.868 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 368], 95.00th=[ 422], 00:27:04.868 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:27:04.868 | 99.99th=[ 550] 00:27:04.868 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=208.84, stdev=73.80, samples=19 00:27:04.868 iops : min= 32, max= 96, avg=52.21, stdev=18.45, samples=19 00:27:04.868 lat (msec) : 250=14.84%, 500=82.03%, 750=3.12% 00:27:04.868 cpu : usr=98.48%, sys=1.13%, ctx=18, majf=0, minf=69 00:27:04.868 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:27:04.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.869 filename2: (groupid=0, 
jobs=1): err= 0: pid=449172: Mon Jul 15 21:48:54 2024 00:27:04.869 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10117msec) 00:27:04.869 slat (usec): min=8, max=111, avg=23.99, stdev=15.44 00:27:04.869 clat (msec): min=161, max=538, avg=326.13, stdev=55.66 00:27:04.869 lat (msec): min=161, max=538, avg=326.15, stdev=55.66 00:27:04.869 clat percentiles (msec): 00:27:04.869 | 1.00th=[ 171], 5.00th=[ 284], 10.00th=[ 300], 20.00th=[ 309], 00:27:04.869 | 30.00th=[ 313], 40.00th=[ 317], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.869 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 368], 95.00th=[ 451], 00:27:04.869 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:27:04.869 | 99.99th=[ 542] 00:27:04.869 bw ( KiB/s): min= 128, max= 256, per=3.89%, avg=202.00, stdev=61.72, samples=19 00:27:04.869 iops : min= 32, max= 64, avg=50.42, stdev=15.38, samples=19 00:27:04.869 lat (msec) : 250=4.03%, 500=92.74%, 750=3.23% 00:27:04.869 cpu : usr=98.56%, sys=1.07%, ctx=20, majf=0, minf=56 00:27:04.869 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:27:04.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.869 filename2: (groupid=0, jobs=1): err= 0: pid=449173: Mon Jul 15 21:48:54 2024 00:27:04.869 read: IOPS=50, BW=202KiB/s (207kB/s)(2048KiB/10128msec) 00:27:04.869 slat (usec): min=4, max=132, avg=72.26, stdev=38.30 00:27:04.869 clat (msec): min=141, max=549, avg=313.39, stdev=62.33 00:27:04.869 lat (msec): min=141, max=549, avg=313.46, stdev=62.33 00:27:04.869 clat percentiles (msec): 00:27:04.869 | 1.00th=[ 142], 5.00th=[ 218], 10.00th=[ 245], 20.00th=[ 300], 00:27:04.869 | 30.00th=[ 309], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.869 | 70.00th=[ 326], 80.00th=[ 330], 
90.00th=[ 334], 95.00th=[ 414], 00:27:04.869 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:27:04.869 | 99.99th=[ 550] 00:27:04.869 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=208.84, stdev=73.80, samples=19 00:27:04.869 iops : min= 32, max= 96, avg=52.21, stdev=18.45, samples=19 00:27:04.869 lat (msec) : 250=12.11%, 500=84.77%, 750=3.12% 00:27:04.869 cpu : usr=98.58%, sys=1.03%, ctx=15, majf=0, minf=52 00:27:04.869 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:27:04.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.869 filename2: (groupid=0, jobs=1): err= 0: pid=449174: Mon Jul 15 21:48:54 2024 00:27:04.869 read: IOPS=48, BW=195KiB/s (200kB/s)(1976KiB/10130msec) 00:27:04.869 slat (nsec): min=4843, max=61072, avg=20170.14, stdev=10399.84 00:27:04.869 clat (msec): min=170, max=605, avg=327.48, stdev=61.12 00:27:04.869 lat (msec): min=170, max=605, avg=327.50, stdev=61.12 00:27:04.869 clat percentiles (msec): 00:27:04.869 | 1.00th=[ 171], 5.00th=[ 288], 10.00th=[ 300], 20.00th=[ 313], 00:27:04.869 | 30.00th=[ 317], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 321], 00:27:04.869 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 342], 95.00th=[ 363], 00:27:04.869 | 99.00th=[ 609], 99.50th=[ 609], 99.90th=[ 609], 99.95th=[ 609], 00:27:04.869 | 99.99th=[ 609] 00:27:04.869 bw ( KiB/s): min= 127, max= 256, per=3.87%, avg=201.21, stdev=64.36, samples=19 00:27:04.869 iops : min= 31, max= 64, avg=50.26, stdev=16.14, samples=19 00:27:04.869 lat (msec) : 250=4.05%, 500=92.71%, 750=3.24% 00:27:04.869 cpu : usr=98.42%, sys=1.17%, ctx=18, majf=0, minf=67 00:27:04.869 IO depths : 1=5.1%, 2=11.3%, 4=25.1%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:27:04.869 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.869 filename2: (groupid=0, jobs=1): err= 0: pid=449175: Mon Jul 15 21:48:54 2024 00:27:04.869 read: IOPS=49, BW=198KiB/s (203kB/s)(1984KiB/10031msec) 00:27:04.869 slat (nsec): min=4912, max=63938, avg=23084.51, stdev=10802.74 00:27:04.869 clat (msec): min=140, max=543, avg=323.36, stdev=58.42 00:27:04.869 lat (msec): min=140, max=543, avg=323.38, stdev=58.42 00:27:04.869 clat percentiles (msec): 00:27:04.869 | 1.00th=[ 142], 5.00th=[ 245], 10.00th=[ 300], 20.00th=[ 309], 00:27:04.869 | 30.00th=[ 313], 40.00th=[ 317], 50.00th=[ 317], 60.00th=[ 326], 00:27:04.869 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 338], 95.00th=[ 409], 00:27:04.869 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:27:04.869 | 99.99th=[ 542] 00:27:04.869 bw ( KiB/s): min= 128, max= 256, per=3.89%, avg=202.11, stdev=64.93, samples=19 00:27:04.869 iops : min= 32, max= 64, avg=50.53, stdev=16.23, samples=19 00:27:04.869 lat (msec) : 250=5.24%, 500=91.53%, 750=3.23% 00:27:04.869 cpu : usr=98.77%, sys=0.85%, ctx=16, majf=0, minf=63 00:27:04.869 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:27:04.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.869 filename2: (groupid=0, jobs=1): err= 0: pid=449176: Mon Jul 15 21:48:54 2024 00:27:04.869 read: IOPS=50, BW=204KiB/s (209kB/s)(2048KiB/10048msec) 00:27:04.869 slat (usec): min=16, max=143, avg=69.56, stdev=34.20 00:27:04.869 clat (msec): min=191, max=440, 
avg=313.45, stdev=32.95 00:27:04.869 lat (msec): min=191, max=440, avg=313.52, stdev=32.96 00:27:04.869 clat percentiles (msec): 00:27:04.869 | 1.00th=[ 211], 5.00th=[ 224], 10.00th=[ 284], 20.00th=[ 300], 00:27:04.869 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:27:04.869 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 334], 95.00th=[ 342], 00:27:04.869 | 99.00th=[ 418], 99.50th=[ 422], 99.90th=[ 439], 99.95th=[ 439], 00:27:04.869 | 99.99th=[ 439] 00:27:04.869 bw ( KiB/s): min= 128, max= 256, per=3.82%, avg=198.40, stdev=59.28, samples=20 00:27:04.869 iops : min= 32, max= 64, avg=49.60, stdev=14.82, samples=20 00:27:04.869 lat (msec) : 250=6.25%, 500=93.75% 00:27:04.869 cpu : usr=98.63%, sys=0.98%, ctx=22, majf=0, minf=43 00:27:04.869 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:27:04.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.869 filename2: (groupid=0, jobs=1): err= 0: pid=449177: Mon Jul 15 21:48:54 2024 00:27:04.869 read: IOPS=49, BW=196KiB/s (201kB/s)(1984KiB/10121msec) 00:27:04.869 slat (usec): min=8, max=134, avg=86.08, stdev=24.29 00:27:04.869 clat (msec): min=168, max=672, avg=325.70, stdev=68.02 00:27:04.869 lat (msec): min=168, max=672, avg=325.79, stdev=68.01 00:27:04.869 clat percentiles (msec): 00:27:04.869 | 1.00th=[ 169], 5.00th=[ 222], 10.00th=[ 284], 20.00th=[ 309], 00:27:04.869 | 30.00th=[ 313], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 326], 00:27:04.869 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 422], 00:27:04.869 | 99.00th=[ 592], 99.50th=[ 592], 99.90th=[ 676], 99.95th=[ 676], 00:27:04.869 | 99.99th=[ 676] 00:27:04.869 bw ( KiB/s): min= 127, max= 256, per=3.87%, avg=201.95, stdev=58.58, samples=19 
00:27:04.869 iops : min= 31, max= 64, avg=50.37, stdev=14.73, samples=19 00:27:04.869 lat (msec) : 250=9.27%, 500=87.50%, 750=3.23% 00:27:04.869 cpu : usr=98.66%, sys=0.95%, ctx=20, majf=0, minf=76 00:27:04.869 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:27:04.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:04.869 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:04.869 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:04.869 00:27:04.869 Run status group 0 (all jobs): 00:27:04.869 READ: bw=5188KiB/s (5312kB/s), 195KiB/s-327KiB/s (200kB/s-335kB/s), io=51.5MiB (54.0MB), run=10018-10167msec 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.869 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 bdev_null0 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 
00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 [2024-07-15 21:48:54.430042] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 bdev_null1 
00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.870 { 00:27:04.870 "params": { 00:27:04.870 "name": "Nvme$subsystem", 00:27:04.870 "trtype": "$TEST_TRANSPORT", 00:27:04.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.870 "adrfam": "ipv4", 00:27:04.870 "trsvcid": "$NVMF_PORT", 00:27:04.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.870 "hdgst": ${hdgst:-false}, 00:27:04.870 "ddgst": ${ddgst:-false} 00:27:04.870 }, 00:27:04.870 "method": "bdev_nvme_attach_controller" 00:27:04.870 } 00:27:04.870 EOF 00:27:04.870 )") 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 
00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.870 { 00:27:04.870 "params": { 00:27:04.870 "name": "Nvme$subsystem", 00:27:04.870 "trtype": "$TEST_TRANSPORT", 00:27:04.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.870 "adrfam": "ipv4", 00:27:04.870 "trsvcid": "$NVMF_PORT", 00:27:04.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.870 "hdgst": ${hdgst:-false}, 00:27:04.870 "ddgst": ${ddgst:-false} 00:27:04.870 }, 00:27:04.870 "method": "bdev_nvme_attach_controller" 00:27:04.870 } 00:27:04.870 EOF 00:27:04.870 )") 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:04.870 
21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:04.870 "params": { 00:27:04.870 "name": "Nvme0", 00:27:04.870 "trtype": "tcp", 00:27:04.870 "traddr": "10.0.0.2", 00:27:04.870 "adrfam": "ipv4", 00:27:04.870 "trsvcid": "4420", 00:27:04.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:04.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:04.870 "hdgst": false, 00:27:04.870 "ddgst": false 00:27:04.870 }, 00:27:04.870 "method": "bdev_nvme_attach_controller" 00:27:04.870 },{ 00:27:04.870 "params": { 00:27:04.870 "name": "Nvme1", 00:27:04.870 "trtype": "tcp", 00:27:04.870 "traddr": "10.0.0.2", 00:27:04.870 "adrfam": "ipv4", 00:27:04.870 "trsvcid": "4420", 00:27:04.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:04.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:04.870 "hdgst": false, 00:27:04.870 "ddgst": false 00:27:04.870 }, 00:27:04.870 "method": "bdev_nvme_attach_controller" 00:27:04.870 }' 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:04.870 21:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.870 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:04.870 ... 00:27:04.870 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:04.870 ... 00:27:04.871 fio-3.35 00:27:04.871 Starting 4 threads 00:27:04.871 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.135 00:27:10.135 filename0: (groupid=0, jobs=1): err= 0: pid=450220: Mon Jul 15 21:49:00 2024 00:27:10.135 read: IOPS=1888, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5002msec) 00:27:10.135 slat (nsec): min=8443, max=54640, avg=18760.30, stdev=8801.07 00:27:10.135 clat (usec): min=1050, max=7567, avg=4174.19, stdev=360.25 00:27:10.135 lat (usec): min=1069, max=7588, avg=4192.95, stdev=360.09 00:27:10.135 clat percentiles (usec): 00:27:10.135 | 1.00th=[ 3392], 5.00th=[ 3851], 10.00th=[ 3982], 20.00th=[ 4047], 00:27:10.135 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:27:10.135 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4424], 00:27:10.135 | 99.00th=[ 5866], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7373], 00:27:10.135 | 99.99th=[ 7570] 00:27:10.135 bw ( KiB/s): min=14640, max=15488, per=24.92%, avg=15102.20, stdev=248.24, samples=10 00:27:10.135 iops : min= 1830, max= 1936, avg=1887.70, stdev=31.06, samples=10 00:27:10.135 lat (msec) : 2=0.13%, 4=13.54%, 10=86.33% 00:27:10.135 cpu : usr=96.14%, sys=3.44%, ctx=16, majf=0, minf=0 00:27:10.135 IO depths : 1=0.7%, 2=15.5%, 4=57.4%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:10.135 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.135 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.135 issued rwts: total=9445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.135 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:10.135 filename0: (groupid=0, jobs=1): err= 0: pid=450221: Mon Jul 15 21:49:00 2024 00:27:10.135 read: IOPS=1892, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5001msec) 00:27:10.135 slat (nsec): min=7632, max=66053, avg=21650.78, stdev=11158.03 00:27:10.135 clat (usec): min=875, max=7520, avg=4139.56, stdev=415.20 00:27:10.135 lat (usec): min=890, max=7536, avg=4161.21, stdev=415.69 00:27:10.135 clat percentiles (usec): 00:27:10.135 | 1.00th=[ 2507], 5.00th=[ 3818], 10.00th=[ 3982], 20.00th=[ 4015], 00:27:10.135 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:27:10.135 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:27:10.135 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7177], 99.95th=[ 7308], 00:27:10.135 | 99.99th=[ 7504] 00:27:10.135 bw ( KiB/s): min=14800, max=15248, per=24.97%, avg=15132.22, stdev=150.47, samples=9 00:27:10.135 iops : min= 1850, max= 1906, avg=1891.44, stdev=18.89, samples=9 00:27:10.135 lat (usec) : 1000=0.03% 00:27:10.135 lat (msec) : 2=0.63%, 4=13.90%, 10=85.43% 00:27:10.135 cpu : usr=95.78%, sys=3.80%, ctx=9, majf=0, minf=9 00:27:10.135 IO depths : 1=0.6%, 2=21.8%, 4=52.6%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:10.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.135 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.135 issued rwts: total=9465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.135 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:10.135 filename1: (groupid=0, jobs=1): err= 0: pid=450222: Mon Jul 15 21:49:00 2024 00:27:10.135 read: IOPS=1905, BW=14.9MiB/s (15.6MB/s)(74.5MiB/5003msec) 00:27:10.135 slat 
(nsec): min=7546, max=63796, avg=21041.80, stdev=10954.81 00:27:10.135 clat (usec): min=944, max=7437, avg=4113.14, stdev=274.50 00:27:10.135 lat (usec): min=969, max=7458, avg=4134.18, stdev=276.00 00:27:10.135 clat percentiles (usec): 00:27:10.135 | 1.00th=[ 3326], 5.00th=[ 3752], 10.00th=[ 3949], 20.00th=[ 4015], 00:27:10.135 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:27:10.135 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:27:10.135 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[ 6652], 99.95th=[ 6915], 00:27:10.135 | 99.99th=[ 7439] 00:27:10.135 bw ( KiB/s): min=14976, max=15616, per=25.16%, avg=15243.20, stdev=187.05, samples=10 00:27:10.135 iops : min= 1872, max= 1952, avg=1905.40, stdev=23.38, samples=10 00:27:10.135 lat (usec) : 1000=0.02% 00:27:10.135 lat (msec) : 2=0.19%, 4=16.21%, 10=83.58% 00:27:10.135 cpu : usr=95.68%, sys=3.92%, ctx=7, majf=0, minf=0 00:27:10.135 IO depths : 1=1.0%, 2=22.7%, 4=51.8%, 8=24.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:10.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.135 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.135 issued rwts: total=9535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.135 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:10.135 filename1: (groupid=0, jobs=1): err= 0: pid=450223: Mon Jul 15 21:49:00 2024 00:27:10.135 read: IOPS=1888, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5002msec) 00:27:10.135 slat (nsec): min=7579, max=66151, avg=21473.99, stdev=11161.96 00:27:10.135 clat (usec): min=800, max=8621, avg=4146.61, stdev=495.20 00:27:10.135 lat (usec): min=812, max=8646, avg=4168.08, stdev=495.56 00:27:10.135 clat percentiles (usec): 00:27:10.135 | 1.00th=[ 2073], 5.00th=[ 3818], 10.00th=[ 3982], 20.00th=[ 4047], 00:27:10.135 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:27:10.135 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 
00:27:10.135 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7439], 00:27:10.135 | 99.99th=[ 8586] 00:27:10.135 bw ( KiB/s): min=14864, max=15232, per=24.92%, avg=15098.67, stdev=125.73, samples=9 00:27:10.135 iops : min= 1858, max= 1904, avg=1887.33, stdev=15.72, samples=9 00:27:10.135 lat (usec) : 1000=0.18% 00:27:10.135 lat (msec) : 2=0.77%, 4=12.79%, 10=86.26% 00:27:10.135 cpu : usr=95.64%, sys=3.96%, ctx=7, majf=0, minf=0 00:27:10.135 IO depths : 1=0.6%, 2=22.7%, 4=51.8%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:10.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.135 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:10.135 issued rwts: total=9448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:10.135 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:10.135 00:27:10.135 Run status group 0 (all jobs): 00:27:10.135 READ: bw=59.2MiB/s (62.0MB/s), 14.8MiB/s-14.9MiB/s (15.5MB/s-15.6MB/s), io=296MiB (310MB), run=5001-5003msec 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null0 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.135 00:27:10.135 real 0m24.217s 00:27:10.135 user 4m35.955s 00:27:10.135 sys 0m5.276s 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.135 21:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:10.135 ************************************ 00:27:10.135 END TEST fio_dif_rand_params 00:27:10.135 ************************************ 00:27:10.135 21:49:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:10.135 
21:49:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:10.135 21:49:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:10.135 21:49:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.135 21:49:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:10.135 ************************************ 00:27:10.135 START TEST fio_dif_digest 00:27:10.135 ************************************ 00:27:10.135 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:27:10.135 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 3 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:10.136 bdev_null0 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:10.136 [2024-07-15 21:49:00.756902] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:10.136 { 00:27:10.136 "params": { 00:27:10.136 "name": "Nvme$subsystem", 00:27:10.136 "trtype": "$TEST_TRANSPORT", 00:27:10.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.136 "adrfam": "ipv4", 00:27:10.136 "trsvcid": 
"$NVMF_PORT", 00:27:10.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.136 "hdgst": ${hdgst:-false}, 00:27:10.136 "ddgst": ${ddgst:-false} 00:27:10.136 }, 00:27:10.136 "method": "bdev_nvme_attach_controller" 00:27:10.136 } 00:27:10.136 EOF 00:27:10.136 )") 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:10.136 "params": { 00:27:10.136 "name": "Nvme0", 00:27:10.136 "trtype": "tcp", 00:27:10.136 "traddr": "10.0.0.2", 00:27:10.136 "adrfam": "ipv4", 00:27:10.136 "trsvcid": "4420", 00:27:10.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:10.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:10.136 "hdgst": true, 00:27:10.136 "ddgst": true 00:27:10.136 }, 00:27:10.136 "method": "bdev_nvme_attach_controller" 00:27:10.136 }' 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:10.136 21:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:10.394 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:10.394 ... 
00:27:10.394 fio-3.35 00:27:10.394 Starting 3 threads 00:27:10.394 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.625 00:27:22.625 filename0: (groupid=0, jobs=1): err= 0: pid=450799: Mon Jul 15 21:49:11 2024 00:27:22.625 read: IOPS=201, BW=25.2MiB/s (26.5MB/s)(254MiB/10045msec) 00:27:22.625 slat (nsec): min=5696, max=61272, avg=14297.54, stdev=3574.70 00:27:22.625 clat (usec): min=8904, max=56012, avg=14811.92, stdev=2287.21 00:27:22.625 lat (usec): min=8915, max=56025, avg=14826.21, stdev=2287.26 00:27:22.625 clat percentiles (usec): 00:27:22.625 | 1.00th=[10290], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:27:22.625 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:27:22.625 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:27:22.625 | 99.00th=[18220], 99.50th=[20841], 99.90th=[55313], 99.95th=[55313], 00:27:22.625 | 99.99th=[55837] 00:27:22.625 bw ( KiB/s): min=23808, max=28416, per=32.37%, avg=25948.10, stdev=817.32, samples=20 00:27:22.625 iops : min= 186, max= 222, avg=202.70, stdev= 6.40, samples=20 00:27:22.625 lat (msec) : 10=0.74%, 20=98.47%, 50=0.59%, 100=0.20% 00:27:22.625 cpu : usr=95.15%, sys=4.41%, ctx=30, majf=0, minf=168 00:27:22.625 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.625 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.625 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:22.625 filename0: (groupid=0, jobs=1): err= 0: pid=450800: Mon Jul 15 21:49:11 2024 00:27:22.625 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(255MiB/10003msec) 00:27:22.625 slat (nsec): min=4751, max=36555, avg=14503.36, stdev=3334.53 00:27:22.625 clat (usec): min=8824, max=55024, avg=14691.03, stdev=2848.49 00:27:22.625 lat (usec): min=8841, max=55036, avg=14705.53, 
stdev=2848.29 00:27:22.625 clat percentiles (usec): 00:27:22.625 | 1.00th=[12387], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:27:22.625 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:27:22.625 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[15926], 00:27:22.625 | 99.00th=[19006], 99.50th=[21627], 99.90th=[54789], 99.95th=[54789], 00:27:22.626 | 99.99th=[54789] 00:27:22.626 bw ( KiB/s): min=20736, max=26880, per=32.56%, avg=26101.80, stdev=1306.68, samples=20 00:27:22.626 iops : min= 162, max= 210, avg=203.90, stdev=10.21, samples=20 00:27:22.626 lat (msec) : 10=0.34%, 20=98.68%, 50=0.54%, 100=0.44% 00:27:22.626 cpu : usr=94.10%, sys=5.39%, ctx=38, majf=0, minf=196 00:27:22.626 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.626 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.626 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:22.626 filename0: (groupid=0, jobs=1): err= 0: pid=450801: Mon Jul 15 21:49:11 2024 00:27:22.626 read: IOPS=221, BW=27.6MiB/s (29.0MB/s)(278MiB/10046msec) 00:27:22.626 slat (nsec): min=5616, max=40784, avg=20097.66, stdev=4795.50 00:27:22.626 clat (usec): min=7736, max=51281, avg=13520.61, stdev=1551.05 00:27:22.626 lat (usec): min=7748, max=51295, avg=13540.71, stdev=1551.35 00:27:22.626 clat percentiles (usec): 00:27:22.626 | 1.00th=[ 9110], 5.00th=[11863], 10.00th=[12387], 20.00th=[12911], 00:27:22.626 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:27:22.626 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14484], 95.00th=[14877], 00:27:22.626 | 99.00th=[15926], 99.50th=[18220], 99.90th=[21365], 99.95th=[46400], 00:27:22.626 | 99.99th=[51119] 00:27:22.626 bw ( KiB/s): min=27648, max=32000, per=35.43%, avg=28403.20, stdev=919.18, 
samples=20 00:27:22.626 iops : min= 216, max= 250, avg=221.90, stdev= 7.18, samples=20 00:27:22.626 lat (msec) : 10=2.03%, 20=97.70%, 50=0.23%, 100=0.05% 00:27:22.626 cpu : usr=94.99%, sys=4.57%, ctx=27, majf=0, minf=113 00:27:22.626 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:22.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.626 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.626 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:22.626 00:27:22.626 Run status group 0 (all jobs): 00:27:22.626 READ: bw=78.3MiB/s (82.1MB/s), 25.2MiB/s-27.6MiB/s (26.5MB/s-29.0MB/s), io=786MiB (825MB), run=10003-10046msec 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:22.626 21:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.627 21:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.627 21:49:11 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.627 00:27:22.627 real 0m11.044s 00:27:22.627 user 0m29.393s 00:27:22.627 sys 0m1.660s 00:27:22.627 21:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.627 21:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.627 ************************************ 00:27:22.627 END TEST fio_dif_digest 00:27:22.627 ************************************ 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:22.627 21:49:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:22.627 21:49:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.627 rmmod nvme_tcp 00:27:22.627 rmmod nvme_fabrics 00:27:22.627 rmmod nvme_keyring 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 446077 ']' 00:27:22.627 21:49:11 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 446077 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 446077 ']' 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 446077 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.627 21:49:11 nvmf_dif -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 446077 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 446077' 00:27:22.627 killing process with pid 446077 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@967 -- # kill 446077 00:27:22.627 21:49:11 nvmf_dif -- common/autotest_common.sh@972 -- # wait 446077 00:27:22.627 21:49:12 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:22.627 21:49:12 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:22.627 Waiting for block devices as requested 00:27:22.627 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:27:22.628 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:22.628 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:22.628 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:22.628 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:22.628 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:22.890 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:22.890 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:22.890 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:23.149 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:23.149 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:23.149 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:23.149 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:23.410 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:23.410 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:23.410 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:23.410 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:23.669 21:49:14 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.669 21:49:14 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.669 21:49:14 
nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.669 21:49:14 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.669 21:49:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.669 21:49:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:23.669 21:49:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.572 21:49:16 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:25.572 00:27:25.572 real 1m5.690s 00:27:25.572 user 6m31.815s 00:27:25.572 sys 0m15.649s 00:27:25.572 21:49:16 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.572 21:49:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:25.572 ************************************ 00:27:25.572 END TEST nvmf_dif 00:27:25.572 ************************************ 00:27:25.572 21:49:16 -- common/autotest_common.sh@1142 -- # return 0 00:27:25.572 21:49:16 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:25.572 21:49:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:25.572 21:49:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.572 21:49:16 -- common/autotest_common.sh@10 -- # set +x 00:27:25.572 ************************************ 00:27:25.572 START TEST nvmf_abort_qd_sizes 00:27:25.572 ************************************ 00:27:25.572 21:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:25.572 * Looking for test storage... 
00:27:25.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.830 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:25.831 21:49:16 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:25.831 21:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.209 21:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.209 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.209 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.209 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:27.469 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:27.469 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 
00:27:27.469 Found net devices under 0000:08:00.0: cvl_0_0 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:27.469 Found net devices under 0000:08:00.1: cvl_0_1 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:27.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:27:27.469 00:27:27.469 --- 10.0.0.2 ping statistics --- 00:27:27.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.469 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:27.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:27:27.469 00:27:27.469 --- 10.0.0.1 ping statistics --- 00:27:27.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.469 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:27.469 21:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:28.405 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:27:28.405 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:27:28.663 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:27:28.663 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:27:28.663 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:27:28.663 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:27:28.663 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:27:28.663 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:27:28.663 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:27:28.663 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:27:28.663 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:27:28.663 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:27:28.663 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:27:28.663 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:27:28.663 0000:80:04.1 (8086 3c21): 
ioatdma -> vfio-pci 00:27:28.663 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:27:29.600 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:27:29.600 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.600 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:29.600 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:29.600 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.600 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:29.600 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:29.600 21:49:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=454525 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 454525 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 454525 ']' 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:29.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.601 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:29.601 [2024-07-15 21:49:20.327922] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:27:29.601 [2024-07-15 21:49:20.328024] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.601 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.601 [2024-07-15 21:49:20.393987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.859 [2024-07-15 21:49:20.515227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.859 [2024-07-15 21:49:20.515284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.859 [2024-07-15 21:49:20.515300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.859 [2024-07-15 21:49:20.515314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.859 [2024-07-15 21:49:20.515325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:29.859 [2024-07-15 21:49:20.515412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.859 [2024-07-15 21:49:20.515468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.859 [2024-07-15 21:49:20.515516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.859 [2024-07-15 21:49:20.515519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 
00:27:29.859 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.118 21:49:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:30.118 ************************************ 00:27:30.118 START TEST spdk_target_abort 00:27:30.118 ************************************ 00:27:30.118 21:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:30.118 21:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:30.118 21:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:27:30.118 21:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.118 21:49:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.392 spdk_targetn1 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.392 [2024-07-15 21:49:23.508427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.392 [2024-07-15 21:49:23.540634] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:33.392 21:49:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:33.392 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.914 Initializing NVMe Controllers 00:27:35.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:35.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:35.914 Initialization complete. Launching workers. 
00:27:35.914 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12663, failed: 0 00:27:35.914 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 11430 00:27:35.914 success 682, unsuccess 551, failed 0 00:27:35.914 21:49:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:35.914 21:49:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:35.914 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.181 Initializing NVMe Controllers 00:27:39.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:39.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:39.181 Initialization complete. Launching workers. 
00:27:39.181 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8383, failed: 0 00:27:39.181 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 7150 00:27:39.181 success 305, unsuccess 928, failed 0 00:27:39.181 21:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:39.181 21:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:39.181 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.451 Initializing NVMe Controllers 00:27:42.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:42.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:42.451 Initialization complete. Launching workers. 
00:27:42.451 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32107, failed: 0 00:27:42.451 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2710, failed to submit 29397 00:27:42.451 success 496, unsuccess 2214, failed 0 00:27:42.451 21:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:42.451 21:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.451 21:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:42.451 21:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.451 21:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:42.451 21:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.451 21:49:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 454525 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 454525 ']' 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 454525 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 454525 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 454525' 00:27:43.823 killing process with pid 454525 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 454525 00:27:43.823 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 454525 00:27:44.082 00:27:44.082 real 0m13.971s 00:27:44.082 user 0m52.931s 00:27:44.082 sys 0m2.464s 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.082 ************************************ 00:27:44.082 END TEST spdk_target_abort 00:27:44.082 ************************************ 00:27:44.082 21:49:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:44.082 21:49:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:44.082 21:49:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:44.082 21:49:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.082 21:49:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:44.082 ************************************ 00:27:44.082 START TEST kernel_target_abort 00:27:44.082 ************************************ 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 
00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:44.082 21:49:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:44.082 21:49:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:45.022 Waiting for block devices as requested 00:27:45.022 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:27:45.284 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:45.284 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:45.284 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:45.284 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:45.544 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:45.544 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:45.544 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:45.801 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:45.801 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:45.801 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:45.801 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:46.061 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:46.061 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:46.061 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:46.061 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:46.324 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:46.324 No valid GPT data, bailing 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.324 21:49:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:46.324 21:49:37 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420
00:27:46.324
00:27:46.324 Discovery Log Number of Records 2, Generation counter 2
00:27:46.324 =====Discovery Log Entry 0======
00:27:46.324 trtype: tcp
00:27:46.324 adrfam: ipv4
00:27:46.324 subtype: current discovery subsystem
00:27:46.324 treq: not specified, sq flow control disable supported
00:27:46.324 portid: 1
00:27:46.324 trsvcid: 4420
00:27:46.324 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:46.324 traddr: 10.0.0.1
00:27:46.324 eflags: none
00:27:46.324 sectype: none
00:27:46.324 =====Discovery Log Entry 1======
00:27:46.324 trtype: tcp
00:27:46.324 adrfam: ipv4
00:27:46.324 subtype: nvme subsystem
00:27:46.324 treq: not specified, sq flow control disable supported
00:27:46.324 portid: 1
00:27:46.324 trsvcid: 4420
00:27:46.324 subnqn: nqn.2016-06.io.spdk:testnqn
00:27:46.324 traddr: 10.0.0.1
00:27:46.324 eflags: none
00:27:46.324 sectype: none
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:46.324 21:49:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:46.639 EAL: No free 2048 kB hugepages reported on node 1
00:27:49.943 Initializing NVMe Controllers
00:27:49.943 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:49.943 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:49.943 Initialization complete. Launching workers.
00:27:49.943 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54151, failed: 0
00:27:49.943 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 54151, failed to submit 0
00:27:49.943 success 0, unsuccess 54151, failed 0
00:27:49.943 21:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:49.943 21:49:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:49.943 EAL: No free 2048 kB hugepages reported on node 1
00:27:53.223 Initializing NVMe Controllers
00:27:53.223 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:53.223 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:53.223 Initialization complete. Launching workers.
00:27:53.223 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100959, failed: 0
00:27:53.223 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25442, failed to submit 75517
00:27:53.223 success 0, unsuccess 25442, failed 0
00:27:53.223 21:49:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:27:53.223 21:49:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:53.223 EAL: No free 2048 kB hugepages reported on node 1
00:27:55.751 Initializing NVMe Controllers
00:27:55.751 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:55.751 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:27:55.751 Initialization complete. Launching workers.
00:27:55.751 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96123, failed: 0
00:27:55.751 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24050, failed to submit 72073
00:27:55.751 success 0, unsuccess 24050, failed 0
00:27:55.751 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:27:55.751 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:27:55.751 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0
00:27:55.751 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:55.751 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:55.752 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:27:55.752 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:55.752 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:27:55.752 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:27:55.752 21:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:27:56.688 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:27:56.688 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:27:56.688 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:27:56.688 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:27:56.688 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:27:56.947 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:27:56.947 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:27:56.947 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:27:56.947 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:27:56.947 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:27:56.947 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:27:56.947 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:27:56.947 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:27:56.947 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:27:56.947 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:27:56.947 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:27:57.914 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:27:57.914
00:27:57.914 real 0m13.823s
00:27:57.914 user 0m6.779s
00:27:57.914 sys 0m2.868s
00:27:57.914 21:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:57.914 21:49:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:27:57.914 ************************************
00:27:57.914 END TEST kernel_target_abort
00:27:57.914 ************************************
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:57.914 rmmod nvme_tcp
00:27:57.914 rmmod nvme_fabrics
00:27:57.914 rmmod nvme_keyring
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 454525 ']'
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 454525
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 454525 ']'
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 454525
00:27:57.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (454525) - No such process
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 454525 is not found'
00:27:57.914 Process with pid 454525 is not found
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']'
00:27:57.914 21:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:58.849 Waiting for block devices as requested
00:27:58.849 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:27:59.108 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma
00:27:59.108 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma
00:27:59.108 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma
00:27:59.368 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma
00:27:59.368 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma
00:27:59.368 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma
00:27:59.368 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma
00:27:59.627 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma
00:27:59.627 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma
00:27:59.627 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma
00:27:59.885 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma
00:27:59.885 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma
00:27:59.885 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma
00:28:00.143 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma
00:28:00.143 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma
00:28:00.143 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma
00:28:00.143 21:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:00.143 21:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:00.143 21:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:00.143 21:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:00.143 21:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:00.143 21:49:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:28:00.143 21:49:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:02.678 21:49:52 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:02.678
00:28:02.678 real 0m36.626s
00:28:02.678 user 1m1.648s
00:28:02.678 sys 0m8.357s
00:28:02.678 21:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:02.678 21:49:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:28:02.678 ************************************
00:28:02.678 END TEST nvmf_abort_qd_sizes
00:28:02.678 ************************************
00:28:02.678 21:49:52 -- common/autotest_common.sh@1142 -- # return 0
00:28:02.678 21:49:52 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:28:02.678 21:49:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:02.678 21:49:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:02.678 21:49:52 -- common/autotest_common.sh@10 -- # set +x
00:28:02.678 ************************************
00:28:02.678 START TEST keyring_file
************************************ 00:28:02.678 21:49:52 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:02.678 * Looking for test storage... 00:28:02.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.678 
21:49:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.678 21:49:53 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.678 21:49:53 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.678 21:49:53 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.678 21:49:53 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.678 21:49:53 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.678 21:49:53 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.678 21:49:53 
keyring_file -- paths/export.sh@5 -- # export PATH 00:28:02.678 21:49:53 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7H6bA81tJo 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7H6bA81tJo 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7H6bA81tJo 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.7H6bA81tJo 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yTzLnobEOF 00:28:02.678 21:49:53 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:02.678 21:49:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yTzLnobEOF 00:28:02.678 21:49:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yTzLnobEOF 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.yTzLnobEOF 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@30 -- # tgtpid=459029 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:02.678 21:49:53 keyring_file -- keyring/file.sh@32 -- # waitforlisten 459029 00:28:02.678 21:49:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 459029 ']' 00:28:02.678 21:49:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.678 21:49:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.678 21:49:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:02.679 21:49:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.679 21:49:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:02.679 [2024-07-15 21:49:53.220696] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:28:02.679 [2024-07-15 21:49:53.220805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459029 ] 00:28:02.679 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.679 [2024-07-15 21:49:53.294550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.679 [2024-07-15 21:49:53.450856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.936 21:49:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:02.936 21:49:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:02.936 21:49:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:02.936 21:49:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.936 21:49:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:02.936 [2024-07-15 21:49:53.708422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.936 null0 00:28:03.195 [2024-07-15 21:49:53.740461] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:03.195 [2024-07-15 21:49:53.740835] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:03.195 [2024-07-15 21:49:53.748453] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.195 21:49:53 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t 
tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:03.195 [2024-07-15 21:49:53.760480] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:03.195 request: 00:28:03.195 { 00:28:03.195 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:03.195 "secure_channel": false, 00:28:03.195 "listen_address": { 00:28:03.195 "trtype": "tcp", 00:28:03.195 "traddr": "127.0.0.1", 00:28:03.195 "trsvcid": "4420" 00:28:03.195 }, 00:28:03.195 "method": "nvmf_subsystem_add_listener", 00:28:03.195 "req_id": 1 00:28:03.195 } 00:28:03.195 Got JSON-RPC error response 00:28:03.195 response: 00:28:03.195 { 00:28:03.195 "code": -32602, 00:28:03.195 "message": "Invalid parameters" 00:28:03.195 } 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:03.195 21:49:53 keyring_file -- keyring/file.sh@46 -- # bperfpid=459039 00:28:03.195 21:49:53 keyring_file -- keyring/file.sh@48 -- # waitforlisten 459039 /var/tmp/bperf.sock 00:28:03.195 21:49:53 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 459039 ']' 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:03.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.195 21:49:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:03.195 [2024-07-15 21:49:53.814164] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:28:03.195 [2024-07-15 21:49:53.814264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459039 ] 00:28:03.195 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.195 [2024-07-15 21:49:53.870949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.195 [2024-07-15 21:49:53.967923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.453 21:49:54 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.453 21:49:54 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:03.453 21:49:54 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7H6bA81tJo 00:28:03.453 21:49:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7H6bA81tJo 00:28:03.710 21:49:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yTzLnobEOF 00:28:03.710 21:49:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yTzLnobEOF 00:28:03.967 21:49:54 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:03.967 21:49:54 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:03.967 21:49:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:03.967 21:49:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:03.967 21:49:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.224 21:49:54 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.7H6bA81tJo == 
\/\t\m\p\/\t\m\p\.\7\H\6\b\A\8\1\t\J\o ]] 00:28:04.224 21:49:54 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:04.224 21:49:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:04.224 21:49:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.224 21:49:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:04.224 21:49:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.481 21:49:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.yTzLnobEOF == \/\t\m\p\/\t\m\p\.\y\T\z\L\n\o\b\E\O\F ]] 00:28:04.481 21:49:55 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:04.481 21:49:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:04.481 21:49:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.481 21:49:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.481 21:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.481 21:49:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:04.738 21:49:55 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:04.738 21:49:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:04.738 21:49:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:04.738 21:49:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.738 21:49:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.738 21:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.738 21:49:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:04.995 21:49:55 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:28:04.995 21:49:55 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:04.995 21:49:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:05.252 [2024-07-15 21:49:55.955671] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:05.252 nvme0n1 00:28:05.252 21:49:56 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:05.508 21:49:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:05.508 21:49:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:05.508 21:49:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:05.508 21:49:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:05.508 21:49:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:05.508 21:49:56 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:05.508 21:49:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:05.508 21:49:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:05.508 21:49:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:05.509 21:49:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:05.509 21:49:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:05.509 21:49:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:05.786 21:49:56 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:05.786 21:49:56 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:06.043 Running I/O for 1 seconds... 00:28:06.973 00:28:06.973 Latency(us) 00:28:06.973 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.973 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:06.973 nvme0n1 : 1.01 10832.21 42.31 0.00 0.00 11769.49 7039.05 23010.42 00:28:06.973 =================================================================================================================== 00:28:06.973 Total : 10832.21 42.31 0.00 0.00 11769.49 7039.05 23010.42 00:28:06.973 0 00:28:06.973 21:49:57 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:06.973 21:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:07.230 21:49:57 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:07.230 21:49:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:07.230 21:49:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:07.230 21:49:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:07.230 21:49:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.230 21:49:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:07.488 21:49:58 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:07.488 21:49:58 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:07.488 21:49:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:07.488 21:49:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:07.488 21:49:58 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:07.488 21:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:07.488 21:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.745 21:49:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:07.745 21:49:58 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:07.745 21:49:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:07.745 21:49:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:07.745 21:49:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:07.745 21:49:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:07.745 21:49:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:07.745 21:49:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:07.745 21:49:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:07.745 21:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:08.002 [2024-07-15 21:49:58.657949] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:08.002 [2024-07-15 21:49:58.657983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c4520 (107): Transport endpoint is not connected 00:28:08.002 [2024-07-15 21:49:58.658974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c4520 (9): Bad file descriptor 00:28:08.002 [2024-07-15 21:49:58.659974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:08.002 [2024-07-15 21:49:58.659999] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:08.002 [2024-07-15 21:49:58.660012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:08.002 request: 00:28:08.002 { 00:28:08.002 "name": "nvme0", 00:28:08.002 "trtype": "tcp", 00:28:08.002 "traddr": "127.0.0.1", 00:28:08.002 "adrfam": "ipv4", 00:28:08.002 "trsvcid": "4420", 00:28:08.002 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:08.002 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:08.002 "prchk_reftag": false, 00:28:08.002 "prchk_guard": false, 00:28:08.002 "hdgst": false, 00:28:08.002 "ddgst": false, 00:28:08.002 "psk": "key1", 00:28:08.002 "method": "bdev_nvme_attach_controller", 00:28:08.002 "req_id": 1 00:28:08.002 } 00:28:08.002 Got JSON-RPC error response 00:28:08.002 response: 00:28:08.002 { 00:28:08.002 "code": -5, 00:28:08.002 "message": "Input/output error" 00:28:08.002 } 00:28:08.002 21:49:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:08.002 21:49:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:08.002 21:49:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:08.002 21:49:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:08.002 21:49:58 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:08.002 
21:49:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:08.002 21:49:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.002 21:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.002 21:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.002 21:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.283 21:49:58 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:08.283 21:49:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:08.283 21:49:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:08.283 21:49:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.283 21:49:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:08.283 21:49:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.283 21:49:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.545 21:49:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:08.545 21:49:59 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:08.545 21:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:08.849 21:49:59 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:08.849 21:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:09.106 21:49:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:09.106 21:49:59 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.106 21:49:59 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:09.106 21:49:59 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:09.106 21:49:59 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.7H6bA81tJo 00:28:09.106 21:49:59 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.7H6bA81tJo 00:28:09.106 21:49:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:09.106 21:49:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.7H6bA81tJo 00:28:09.106 21:49:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:09.106 21:49:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.106 21:49:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:09.106 21:49:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.106 21:49:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7H6bA81tJo 00:28:09.107 21:49:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7H6bA81tJo 00:28:09.364 [2024-07-15 21:50:00.108957] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7H6bA81tJo': 0100660 00:28:09.364 [2024-07-15 21:50:00.108995] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:09.364 request: 00:28:09.364 { 00:28:09.364 "name": "key0", 00:28:09.364 "path": "/tmp/tmp.7H6bA81tJo", 00:28:09.364 "method": "keyring_file_add_key", 00:28:09.364 "req_id": 1 00:28:09.364 } 00:28:09.364 Got JSON-RPC error response 00:28:09.364 response: 00:28:09.364 { 00:28:09.364 "code": -1, 
00:28:09.364 "message": "Operation not permitted" 00:28:09.364 } 00:28:09.364 21:50:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:09.364 21:50:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:09.364 21:50:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:09.364 21:50:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:09.364 21:50:00 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.7H6bA81tJo 00:28:09.364 21:50:00 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7H6bA81tJo 00:28:09.364 21:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7H6bA81tJo 00:28:09.621 21:50:00 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.7H6bA81tJo 00:28:09.621 21:50:00 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:09.621 21:50:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:09.621 21:50:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:09.621 21:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:09.621 21:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:09.621 21:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.879 21:50:00 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:09.879 21:50:00 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:09.879 21:50:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:09.879 21:50:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:09.879 21:50:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:09.879 21:50:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.879 21:50:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:09.879 21:50:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.879 21:50:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:09.879 21:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:10.136 [2024-07-15 21:50:00.846906] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.7H6bA81tJo': No such file or directory 00:28:10.136 [2024-07-15 21:50:00.846939] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:10.136 [2024-07-15 21:50:00.846975] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:10.136 [2024-07-15 21:50:00.846986] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:10.136 [2024-07-15 21:50:00.846997] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:10.136 request: 00:28:10.136 { 00:28:10.136 "name": "nvme0", 00:28:10.136 "trtype": "tcp", 00:28:10.136 "traddr": "127.0.0.1", 00:28:10.136 "adrfam": "ipv4", 00:28:10.136 "trsvcid": "4420", 00:28:10.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.136 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:10.136 "prchk_reftag": false, 00:28:10.136 "prchk_guard": false, 00:28:10.136 "hdgst": false, 00:28:10.136 "ddgst": false, 00:28:10.136 "psk": "key0", 00:28:10.136 "method": "bdev_nvme_attach_controller", 00:28:10.136 "req_id": 1 00:28:10.136 } 00:28:10.136 Got JSON-RPC error response 00:28:10.136 response: 00:28:10.136 { 00:28:10.136 "code": -19, 00:28:10.136 "message": "No such device" 00:28:10.136 } 00:28:10.136 21:50:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:10.136 21:50:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:10.136 21:50:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:10.136 21:50:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:10.136 21:50:00 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:10.136 21:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:10.393 21:50:01 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TPTBPieW4n 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:10.393 21:50:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:10.393 21:50:01 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:28:10.393 21:50:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:10.393 21:50:01 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:10.393 21:50:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:10.393 21:50:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TPTBPieW4n 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TPTBPieW4n 00:28:10.393 21:50:01 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.TPTBPieW4n 00:28:10.393 21:50:01 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TPTBPieW4n 00:28:10.393 21:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TPTBPieW4n 00:28:10.651 21:50:01 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:10.651 21:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:10.908 nvme0n1 00:28:10.908 21:50:01 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:10.908 21:50:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:10.908 21:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:10.908 21:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:10.908 21:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:10.908 21:50:01 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:11.165 21:50:01 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:11.165 21:50:01 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:11.165 21:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:11.422 21:50:02 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:11.422 21:50:02 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:11.422 21:50:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:11.422 21:50:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:11.422 21:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:11.679 21:50:02 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:11.679 21:50:02 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:11.679 21:50:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:11.679 21:50:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:11.679 21:50:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:11.680 21:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:11.680 21:50:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:11.937 21:50:02 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:11.937 21:50:02 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:11.937 21:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:12.194 21:50:02 
keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:12.194 21:50:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.194 21:50:02 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:12.452 21:50:03 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:12.452 21:50:03 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TPTBPieW4n 00:28:12.452 21:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TPTBPieW4n 00:28:12.709 21:50:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.yTzLnobEOF 00:28:12.709 21:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.yTzLnobEOF 00:28:12.966 21:50:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:12.966 21:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:13.223 nvme0n1 00:28:13.223 21:50:03 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:13.223 21:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:13.789 21:50:04 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:13.790 "subsystems": [ 00:28:13.790 { 00:28:13.790 "subsystem": "keyring", 00:28:13.790 "config": [ 00:28:13.790 { 
00:28:13.790 "method": "keyring_file_add_key", 00:28:13.790 "params": { 00:28:13.790 "name": "key0", 00:28:13.790 "path": "/tmp/tmp.TPTBPieW4n" 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "keyring_file_add_key", 00:28:13.790 "params": { 00:28:13.790 "name": "key1", 00:28:13.790 "path": "/tmp/tmp.yTzLnobEOF" 00:28:13.790 } 00:28:13.790 } 00:28:13.790 ] 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "subsystem": "iobuf", 00:28:13.790 "config": [ 00:28:13.790 { 00:28:13.790 "method": "iobuf_set_options", 00:28:13.790 "params": { 00:28:13.790 "small_pool_count": 8192, 00:28:13.790 "large_pool_count": 1024, 00:28:13.790 "small_bufsize": 8192, 00:28:13.790 "large_bufsize": 135168 00:28:13.790 } 00:28:13.790 } 00:28:13.790 ] 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "subsystem": "sock", 00:28:13.790 "config": [ 00:28:13.790 { 00:28:13.790 "method": "sock_set_default_impl", 00:28:13.790 "params": { 00:28:13.790 "impl_name": "posix" 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "sock_impl_set_options", 00:28:13.790 "params": { 00:28:13.790 "impl_name": "ssl", 00:28:13.790 "recv_buf_size": 4096, 00:28:13.790 "send_buf_size": 4096, 00:28:13.790 "enable_recv_pipe": true, 00:28:13.790 "enable_quickack": false, 00:28:13.790 "enable_placement_id": 0, 00:28:13.790 "enable_zerocopy_send_server": true, 00:28:13.790 "enable_zerocopy_send_client": false, 00:28:13.790 "zerocopy_threshold": 0, 00:28:13.790 "tls_version": 0, 00:28:13.790 "enable_ktls": false 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "sock_impl_set_options", 00:28:13.790 "params": { 00:28:13.790 "impl_name": "posix", 00:28:13.790 "recv_buf_size": 2097152, 00:28:13.790 "send_buf_size": 2097152, 00:28:13.790 "enable_recv_pipe": true, 00:28:13.790 "enable_quickack": false, 00:28:13.790 "enable_placement_id": 0, 00:28:13.790 "enable_zerocopy_send_server": true, 00:28:13.790 "enable_zerocopy_send_client": false, 00:28:13.790 "zerocopy_threshold": 0, 
00:28:13.790 "tls_version": 0, 00:28:13.790 "enable_ktls": false 00:28:13.790 } 00:28:13.790 } 00:28:13.790 ] 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "subsystem": "vmd", 00:28:13.790 "config": [] 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "subsystem": "accel", 00:28:13.790 "config": [ 00:28:13.790 { 00:28:13.790 "method": "accel_set_options", 00:28:13.790 "params": { 00:28:13.790 "small_cache_size": 128, 00:28:13.790 "large_cache_size": 16, 00:28:13.790 "task_count": 2048, 00:28:13.790 "sequence_count": 2048, 00:28:13.790 "buf_count": 2048 00:28:13.790 } 00:28:13.790 } 00:28:13.790 ] 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "subsystem": "bdev", 00:28:13.790 "config": [ 00:28:13.790 { 00:28:13.790 "method": "bdev_set_options", 00:28:13.790 "params": { 00:28:13.790 "bdev_io_pool_size": 65535, 00:28:13.790 "bdev_io_cache_size": 256, 00:28:13.790 "bdev_auto_examine": true, 00:28:13.790 "iobuf_small_cache_size": 128, 00:28:13.790 "iobuf_large_cache_size": 16 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "bdev_raid_set_options", 00:28:13.790 "params": { 00:28:13.790 "process_window_size_kb": 1024 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "bdev_iscsi_set_options", 00:28:13.790 "params": { 00:28:13.790 "timeout_sec": 30 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "bdev_nvme_set_options", 00:28:13.790 "params": { 00:28:13.790 "action_on_timeout": "none", 00:28:13.790 "timeout_us": 0, 00:28:13.790 "timeout_admin_us": 0, 00:28:13.790 "keep_alive_timeout_ms": 10000, 00:28:13.790 "arbitration_burst": 0, 00:28:13.790 "low_priority_weight": 0, 00:28:13.790 "medium_priority_weight": 0, 00:28:13.790 "high_priority_weight": 0, 00:28:13.790 "nvme_adminq_poll_period_us": 10000, 00:28:13.790 "nvme_ioq_poll_period_us": 0, 00:28:13.790 "io_queue_requests": 512, 00:28:13.790 "delay_cmd_submit": true, 00:28:13.790 "transport_retry_count": 4, 00:28:13.790 "bdev_retry_count": 3, 00:28:13.790 
"transport_ack_timeout": 0, 00:28:13.790 "ctrlr_loss_timeout_sec": 0, 00:28:13.790 "reconnect_delay_sec": 0, 00:28:13.790 "fast_io_fail_timeout_sec": 0, 00:28:13.790 "disable_auto_failback": false, 00:28:13.790 "generate_uuids": false, 00:28:13.790 "transport_tos": 0, 00:28:13.790 "nvme_error_stat": false, 00:28:13.790 "rdma_srq_size": 0, 00:28:13.790 "io_path_stat": false, 00:28:13.790 "allow_accel_sequence": false, 00:28:13.790 "rdma_max_cq_size": 0, 00:28:13.790 "rdma_cm_event_timeout_ms": 0, 00:28:13.790 "dhchap_digests": [ 00:28:13.790 "sha256", 00:28:13.790 "sha384", 00:28:13.790 "sha512" 00:28:13.790 ], 00:28:13.790 "dhchap_dhgroups": [ 00:28:13.790 "null", 00:28:13.790 "ffdhe2048", 00:28:13.790 "ffdhe3072", 00:28:13.790 "ffdhe4096", 00:28:13.790 "ffdhe6144", 00:28:13.790 "ffdhe8192" 00:28:13.790 ] 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "bdev_nvme_attach_controller", 00:28:13.790 "params": { 00:28:13.790 "name": "nvme0", 00:28:13.790 "trtype": "TCP", 00:28:13.790 "adrfam": "IPv4", 00:28:13.790 "traddr": "127.0.0.1", 00:28:13.790 "trsvcid": "4420", 00:28:13.790 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.790 "prchk_reftag": false, 00:28:13.790 "prchk_guard": false, 00:28:13.790 "ctrlr_loss_timeout_sec": 0, 00:28:13.790 "reconnect_delay_sec": 0, 00:28:13.790 "fast_io_fail_timeout_sec": 0, 00:28:13.790 "psk": "key0", 00:28:13.790 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:13.790 "hdgst": false, 00:28:13.790 "ddgst": false 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "bdev_nvme_set_hotplug", 00:28:13.790 "params": { 00:28:13.790 "period_us": 100000, 00:28:13.790 "enable": false 00:28:13.790 } 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "method": "bdev_wait_for_examine" 00:28:13.790 } 00:28:13.790 ] 00:28:13.790 }, 00:28:13.790 { 00:28:13.790 "subsystem": "nbd", 00:28:13.790 "config": [] 00:28:13.790 } 00:28:13.790 ] 00:28:13.790 }' 00:28:13.790 21:50:04 keyring_file -- keyring/file.sh@114 -- # 
killprocess 459039 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 459039 ']' 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 459039 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 459039 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 459039' 00:28:13.790 killing process with pid 459039 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@967 -- # kill 459039 00:28:13.790 Received shutdown signal, test time was about 1.000000 seconds 00:28:13.790 00:28:13.790 Latency(us) 00:28:13.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.790 =================================================================================================================== 00:28:13.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@972 -- # wait 459039 00:28:13.790 21:50:04 keyring_file -- keyring/file.sh@117 -- # bperfpid=460190 00:28:13.790 21:50:04 keyring_file -- keyring/file.sh@119 -- # waitforlisten 460190 /var/tmp/bperf.sock 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 460190 ']' 00:28:13.790 21:50:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.790 21:50:04 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:13.790 
21:50:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:13.791 21:50:04 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:13.791 "subsystems": [ 00:28:13.791 { 00:28:13.791 "subsystem": "keyring", 00:28:13.791 "config": [ 00:28:13.791 { 00:28:13.791 "method": "keyring_file_add_key", 00:28:13.791 "params": { 00:28:13.791 "name": "key0", 00:28:13.791 "path": "/tmp/tmp.TPTBPieW4n" 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "method": "keyring_file_add_key", 00:28:13.791 "params": { 00:28:13.791 "name": "key1", 00:28:13.791 "path": "/tmp/tmp.yTzLnobEOF" 00:28:13.791 } 00:28:13.791 } 00:28:13.791 ] 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "subsystem": "iobuf", 00:28:13.791 "config": [ 00:28:13.791 { 00:28:13.791 "method": "iobuf_set_options", 00:28:13.791 "params": { 00:28:13.791 "small_pool_count": 8192, 00:28:13.791 "large_pool_count": 1024, 00:28:13.791 "small_bufsize": 8192, 00:28:13.791 "large_bufsize": 135168 00:28:13.791 } 00:28:13.791 } 00:28:13.791 ] 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "subsystem": "sock", 00:28:13.791 "config": [ 00:28:13.791 { 00:28:13.791 "method": "sock_set_default_impl", 00:28:13.791 "params": { 00:28:13.791 "impl_name": "posix" 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "method": "sock_impl_set_options", 00:28:13.791 "params": { 00:28:13.791 "impl_name": "ssl", 00:28:13.791 "recv_buf_size": 4096, 00:28:13.791 "send_buf_size": 4096, 00:28:13.791 "enable_recv_pipe": true, 00:28:13.791 "enable_quickack": false, 00:28:13.791 "enable_placement_id": 0, 00:28:13.791 "enable_zerocopy_send_server": true, 00:28:13.791 "enable_zerocopy_send_client": false, 00:28:13.791 "zerocopy_threshold": 0, 00:28:13.791 "tls_version": 0, 00:28:13.791 "enable_ktls": false 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "method": "sock_impl_set_options", 00:28:13.791 "params": { 00:28:13.791 "impl_name": "posix", 00:28:13.791 "recv_buf_size": 2097152, 00:28:13.791 
"send_buf_size": 2097152, 00:28:13.791 "enable_recv_pipe": true, 00:28:13.791 "enable_quickack": false, 00:28:13.791 "enable_placement_id": 0, 00:28:13.791 "enable_zerocopy_send_server": true, 00:28:13.791 "enable_zerocopy_send_client": false, 00:28:13.791 "zerocopy_threshold": 0, 00:28:13.791 "tls_version": 0, 00:28:13.791 "enable_ktls": false 00:28:13.791 } 00:28:13.791 } 00:28:13.791 ] 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "subsystem": "vmd", 00:28:13.791 "config": [] 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "subsystem": "accel", 00:28:13.791 "config": [ 00:28:13.791 { 00:28:13.791 "method": "accel_set_options", 00:28:13.791 "params": { 00:28:13.791 "small_cache_size": 128, 00:28:13.791 "large_cache_size": 16, 00:28:13.791 "task_count": 2048, 00:28:13.791 "sequence_count": 2048, 00:28:13.791 "buf_count": 2048 00:28:13.791 } 00:28:13.791 } 00:28:13.791 ] 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "subsystem": "bdev", 00:28:13.791 "config": [ 00:28:13.791 { 00:28:13.791 "method": "bdev_set_options", 00:28:13.791 "params": { 00:28:13.791 "bdev_io_pool_size": 65535, 00:28:13.791 "bdev_io_cache_size": 256, 00:28:13.791 "bdev_auto_examine": true, 00:28:13.791 "iobuf_small_cache_size": 128, 00:28:13.791 "iobuf_large_cache_size": 16 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "method": "bdev_raid_set_options", 00:28:13.791 "params": { 00:28:13.791 "process_window_size_kb": 1024 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "method": "bdev_iscsi_set_options", 00:28:13.791 "params": { 00:28:13.791 "timeout_sec": 30 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "method": "bdev_nvme_set_options", 00:28:13.791 "params": { 00:28:13.791 "action_on_timeout": "none", 00:28:13.791 "timeout_us": 0, 00:28:13.791 "timeout_admin_us": 0, 00:28:13.791 "keep_alive_timeout_ms": 10000, 00:28:13.791 "arbitration_burst": 0, 00:28:13.791 "low_priority_weight": 0, 00:28:13.791 "medium_priority_weight": 0, 00:28:13.791 "high_priority_weight": 
0, 00:28:13.791 "nvme_adminq_poll_period_us": 10000, 00:28:13.791 "nvme_ioq_poll_period_us": 0, 00:28:13.791 "io_queue_requests": 512, 00:28:13.791 "delay_cmd_submit": true, 00:28:13.791 "transport_retry_count": 4, 00:28:13.791 "bdev_retry_count": 3, 00:28:13.791 "transport_ack_timeout": 0, 00:28:13.791 "ctrlr_loss_timeout_sec": 0, 00:28:13.791 "reconnect_delay_sec": 0, 00:28:13.791 "fast_io_fail_timeout_sec": 0, 00:28:13.791 "disable_auto_failback": false, 00:28:13.791 "generate_uuids": false, 00:28:13.791 "transport_tos": 0, 00:28:13.791 "nvme_error_stat": false, 00:28:13.791 "rdma_srq_size": 0, 00:28:13.791 "io_path_stat": false, 00:28:13.791 "allow_accel_sequence": false, 00:28:13.791 "rdma_max_cq_size": 0, 00:28:13.791 "rdma_cm_event_timeout_ms": 0, 00:28:13.791 "dhchap_digests": [ 00:28:13.791 "sha256", 00:28:13.791 "sha384", 00:28:13.791 "sha512" 00:28:13.791 ], 00:28:13.791 "dhchap_dhgroups": [ 00:28:13.791 "null", 00:28:13.791 "ffdhe2048", 00:28:13.791 "ffdhe3072", 00:28:13.791 "ffdhe4096", 00:28:13.791 "ffdhe6144", 00:28:13.791 "ffdhe8192" 00:28:13.791 ] 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "method": "bdev_nvme_attach_controller", 00:28:13.791 "params": { 00:28:13.791 "name": "nvme0", 00:28:13.791 "trtype": "TCP", 00:28:13.791 "adrfam": "IPv4", 00:28:13.791 "traddr": "127.0.0.1", 00:28:13.791 "trsvcid": "4420", 00:28:13.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:13.791 "prchk_reftag": false, 00:28:13.791 "prchk_guard": false, 00:28:13.791 "ctrlr_loss_timeout_sec": 0, 00:28:13.791 "reconnect_delay_sec": 0, 00:28:13.791 "fast_io_fail_timeout_sec": 0, 00:28:13.791 "psk": "key0", 00:28:13.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:13.791 "hdgst": false, 00:28:13.791 "ddgst": false 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "method": "bdev_nvme_set_hotplug", 00:28:13.791 "params": { 00:28:13.791 "period_us": 100000, 00:28:13.791 "enable": false 00:28:13.791 } 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 
"method": "bdev_wait_for_examine" 00:28:13.791 } 00:28:13.791 ] 00:28:13.791 }, 00:28:13.791 { 00:28:13.791 "subsystem": "nbd", 00:28:13.791 "config": [] 00:28:13.791 } 00:28:13.791 ] 00:28:13.791 }' 00:28:13.791 21:50:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.791 21:50:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:13.791 21:50:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:13.791 [2024-07-15 21:50:04.526737] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 00:28:13.791 [2024-07-15 21:50:04.526813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460190 ] 00:28:13.791 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.791 [2024-07-15 21:50:04.576626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.050 [2024-07-15 21:50:04.677286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.307 [2024-07-15 21:50:04.844114] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:14.870 21:50:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:14.870 21:50:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:14.870 21:50:05 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:14.870 21:50:05 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:14.870 21:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:15.128 21:50:05 
keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:15.128 21:50:05 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:15.128 21:50:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:15.128 21:50:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:15.128 21:50:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:15.128 21:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:15.128 21:50:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:15.385 21:50:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:15.385 21:50:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:15.385 21:50:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:15.385 21:50:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:15.385 21:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:15.385 21:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:15.385 21:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:15.643 21:50:06 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:15.643 21:50:06 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:15.643 21:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:15.643 21:50:06 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:15.901 21:50:06 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:15.902 21:50:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:15.902 21:50:06 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TPTBPieW4n 
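The `get_refcnt` helper traced above pipes `keyring_get_keys` RPC output through `jq '.[] | select(.name == "key0") | .refcnt'`. A minimal sketch of the same selection in Python, using a hypothetical sample payload (field names mirror the jq filters in the trace; the sample values are illustrative, not captured from this run):

```python
import json

# Hypothetical sample of what `keyring_get_keys` returns over the bperf
# RPC socket; only .name and .refcnt are consulted by the jq filters above.
sample = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.TPTBPieW4n", "refcnt": 2},
  {"name": "key1", "path": "/tmp/tmp.yTzLnobEOF", "refcnt": 1}
]
""")

def get_refcnt(keys, name):
    # Equivalent of: jq '.[] | select(.name == "<name>") | .refcnt'
    for key in keys:
        if key["name"] == name:
            return key["refcnt"]
    return None

print(get_refcnt(sample, "key0"))  # -> 2
print(get_refcnt(sample, "key1"))  # -> 1
```

The test script then compares these counts against expected values with bash arithmetic, e.g. `(( 2 == 2 ))`.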
/tmp/tmp.yTzLnobEOF 00:28:15.902 21:50:06 keyring_file -- keyring/file.sh@20 -- # killprocess 460190 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 460190 ']' 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@952 -- # kill -0 460190 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 460190 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 460190' 00:28:15.902 killing process with pid 460190 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@967 -- # kill 460190 00:28:15.902 Received shutdown signal, test time was about 1.000000 seconds 00:28:15.902 00:28:15.902 Latency(us) 00:28:15.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.902 =================================================================================================================== 00:28:15.902 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:15.902 21:50:06 keyring_file -- common/autotest_common.sh@972 -- # wait 460190 00:28:16.160 21:50:06 keyring_file -- keyring/file.sh@21 -- # killprocess 459029 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 459029 ']' 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@952 -- # kill -0 459029 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 459029 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 459029' 00:28:16.160 killing process with pid 459029 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@967 -- # kill 459029 00:28:16.160 [2024-07-15 21:50:06.843134] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:16.160 21:50:06 keyring_file -- common/autotest_common.sh@972 -- # wait 459029 00:28:16.418 00:28:16.418 real 0m14.144s 00:28:16.418 user 0m35.963s 00:28:16.418 sys 0m3.114s 00:28:16.418 21:50:07 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:16.418 21:50:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:16.418 ************************************ 00:28:16.418 END TEST keyring_file 00:28:16.418 ************************************ 00:28:16.418 21:50:07 -- common/autotest_common.sh@1142 -- # return 0 00:28:16.418 21:50:07 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:16.418 21:50:07 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:16.418 21:50:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:16.418 21:50:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.418 21:50:07 -- common/autotest_common.sh@10 -- # set +x 00:28:16.418 ************************************ 00:28:16.418 START TEST keyring_linux 00:28:16.418 ************************************ 00:28:16.418 21:50:07 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:16.676 * Looking for test storage... 
00:28:16.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.677 21:50:07 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.677 21:50:07 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.677 21:50:07 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.677 21:50:07 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.677 21:50:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.677 21:50:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.677 21:50:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.677 21:50:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:16.677 21:50:07 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:16.677 21:50:07 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:16.677 /tmp/:spdk-test:key0 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:16.677 21:50:07 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:16.677 21:50:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:16.677 /tmp/:spdk-test:key1 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=460566 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:16.677 21:50:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 460566 00:28:16.677 21:50:07 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 460566 ']' 00:28:16.677 21:50:07 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.677 21:50:07 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:16.677 21:50:07 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.677 21:50:07 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:16.677 21:50:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:16.677 [2024-07-15 21:50:07.414155] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
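The `prep_key` calls traced above run `format_interchange_psk`, which wraps the configured key text in the NVMe TLS PSK interchange format, `NVMeTLSkey-1:<hh>:<base64(key || CRC32)>:`, before writing it to `/tmp/:spdk-test:key0` and `:key1`. A minimal sketch of that encoding, under the assumptions that the key text is taken verbatim as ASCII bytes and the CRC32 is appended little-endian (digest 0 means no hash); the exact helper lives in SPDK's `nvmf/common.sh`, so treat this as an illustration rather than its implementation:

```python
import base64
import struct
import zlib

def format_interchange_psk(key_text: str, hash_id: int = 0) -> str:
    """Sketch of the NVMe TLS PSK interchange format:
    NVMeTLSkey-1:<hh>:<base64 of key bytes followed by CRC32 (assumed LE)>:
    """
    key = key_text.encode("ascii")
    # Append the CRC32 of the key material, then base64 the whole blob.
    blob = key + struct.pack("<I", zlib.crc32(key))
    return f"NVMeTLSkey-1:{hash_id:02}:{base64.b64encode(blob).decode('ascii')}:"

# Same key material the test configures for key0 in the trace above.
psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

The `keyctl add user :spdk-test:key0 ... @s` step further down then loads the formatted string into the Linux session keyring, and the test cross-checks the returned serial number via `keyctl search` and `keyctl print`.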
00:28:16.677 [2024-07-15 21:50:07.414262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460566 ] 00:28:16.677 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.935 [2024-07-15 21:50:07.477053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.935 [2024-07-15 21:50:07.595864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:17.194 21:50:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:17.194 [2024-07-15 21:50:07.834567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.194 null0 00:28:17.194 [2024-07-15 21:50:07.866573] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:17.194 [2024-07-15 21:50:07.866908] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.194 21:50:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:17.194 430757553 00:28:17.194 21:50:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:17.194 954407708 00:28:17.194 21:50:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=460581 00:28:17.194 21:50:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 460581 
/var/tmp/bperf.sock 00:28:17.194 21:50:07 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 460581 ']' 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.194 21:50:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:17.194 [2024-07-15 21:50:07.943805] Starting SPDK v24.09-pre git sha1 996bd8752 / DPDK 24.03.0 initialization... 
00:28:17.194 [2024-07-15 21:50:07.943900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460581 ] 00:28:17.194 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.451 [2024-07-15 21:50:07.999018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.451 [2024-07-15 21:50:08.098831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.451 21:50:08 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.451 21:50:08 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:17.451 21:50:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:17.452 21:50:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:17.709 21:50:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:17.709 21:50:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:18.275 21:50:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:18.275 21:50:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:18.535 [2024-07-15 21:50:09.136025] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:18.535 
nvme0n1 00:28:18.535 21:50:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:18.535 21:50:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:18.535 21:50:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:18.535 21:50:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:18.535 21:50:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:18.535 21:50:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:18.793 21:50:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:18.793 21:50:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:18.793 21:50:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:18.793 21:50:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:18.793 21:50:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:18.793 21:50:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:18.793 21:50:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:19.050 21:50:09 keyring_linux -- keyring/linux.sh@25 -- # sn=430757553 00:28:19.050 21:50:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:19.050 21:50:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:19.050 21:50:09 keyring_linux -- keyring/linux.sh@26 -- # [[ 430757553 == \4\3\0\7\5\7\5\5\3 ]] 00:28:19.050 21:50:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 430757553 00:28:19.050 21:50:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:19.050 21:50:09 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:19.306 Running I/O for 1 seconds... 00:28:20.235 00:28:20.235 Latency(us) 00:28:20.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.235 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:20.235 nvme0n1 : 1.01 10972.80 42.86 0.00 0.00 11590.14 3470.98 14757.74 00:28:20.235 =================================================================================================================== 00:28:20.235 Total : 10972.80 42.86 0.00 0.00 11590.14 3470.98 14757.74 00:28:20.235 0 00:28:20.235 21:50:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:20.235 21:50:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:20.492 21:50:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:20.492 21:50:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:20.492 21:50:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:20.492 21:50:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:20.492 21:50:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:20.492 21:50:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:20.748 21:50:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:20.748 21:50:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:20.748 21:50:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:20.748 21:50:11 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:20.748 21:50:11 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:28:20.748 21:50:11 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:20.748 21:50:11 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:20.748 21:50:11 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.748 21:50:11 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:20.748 21:50:11 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.748 21:50:11 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:20.748 21:50:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:21.006 [2024-07-15 21:50:11.710584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:21.006 [2024-07-15 21:50:11.711290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1485500 (107): Transport endpoint is not connected 00:28:21.006 [2024-07-15 21:50:11.712282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1485500 (9): Bad file descriptor 00:28:21.006 [2024-07-15 21:50:11.713282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:21.006 [2024-07-15 21:50:11.713300] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:21.006 [2024-07-15 21:50:11.713313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:21.006 request: 00:28:21.006 { 00:28:21.006 "name": "nvme0", 00:28:21.006 "trtype": "tcp", 00:28:21.006 "traddr": "127.0.0.1", 00:28:21.006 "adrfam": "ipv4", 00:28:21.006 "trsvcid": "4420", 00:28:21.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:21.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:21.006 "prchk_reftag": false, 00:28:21.006 "prchk_guard": false, 00:28:21.006 "hdgst": false, 00:28:21.006 "ddgst": false, 00:28:21.006 "psk": ":spdk-test:key1", 00:28:21.006 "method": "bdev_nvme_attach_controller", 00:28:21.006 "req_id": 1 00:28:21.006 } 00:28:21.006 Got JSON-RPC error response 00:28:21.006 response: 00:28:21.006 { 00:28:21.006 "code": -5, 00:28:21.006 "message": "Input/output error" 00:28:21.006 } 00:28:21.006 21:50:11 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:21.006 21:50:11 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:21.006 21:50:11 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:21.006 21:50:11 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:21.006 21:50:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@33 -- # sn=430757553 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 430757553 00:28:21.007 1 links removed 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@33 -- # sn=954407708 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 954407708 00:28:21.007 1 links removed 00:28:21.007 21:50:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 460581 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 460581 ']' 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 460581 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 460581 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 460581' 00:28:21.007 killing process with pid 460581 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@967 -- # kill 460581 00:28:21.007 Received shutdown signal, test time was about 1.000000 seconds 00:28:21.007 00:28:21.007 Latency(us) 00:28:21.007 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.007 =================================================================================================================== 00:28:21.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.007 21:50:11 keyring_linux -- common/autotest_common.sh@972 -- # wait 460581 00:28:21.264 21:50:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 460566 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 460566 ']' 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 460566 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 460566 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 460566' 00:28:21.264 killing process with pid 460566 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@967 -- # kill 460566 00:28:21.264 21:50:11 keyring_linux -- common/autotest_common.sh@972 -- # wait 460566 00:28:21.522 00:28:21.522 real 0m5.076s 00:28:21.522 user 0m10.403s 00:28:21.522 sys 0m1.536s 00:28:21.522 21:50:12 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:21.522 21:50:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:21.522 ************************************ 00:28:21.522 END TEST keyring_linux 00:28:21.522 ************************************ 00:28:21.522 21:50:12 -- common/autotest_common.sh@1142 -- # return 0 00:28:21.522 21:50:12 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 
00:28:21.522 21:50:12 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:21.522 21:50:12 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:21.522 21:50:12 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:21.522 21:50:12 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:21.522 21:50:12 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:21.522 21:50:12 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:21.522 21:50:12 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:21.522 21:50:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:21.522 21:50:12 -- common/autotest_common.sh@10 -- # set +x 00:28:21.522 21:50:12 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:21.522 21:50:12 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:21.522 21:50:12 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:21.522 21:50:12 -- common/autotest_common.sh@10 -- # set +x 00:28:23.433 INFO: APP EXITING 00:28:23.433 INFO: killing all VMs 00:28:23.433 INFO: killing vhost app 00:28:23.433 WARN: no vhost pid file found 00:28:23.433 INFO: EXIT DONE 00:28:24.371 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:28:24.371 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:28:24.371 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:28:24.371 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:28:24.371 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:28:24.371 
0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:28:24.371 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:28:24.371 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:28:24.371 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:28:24.371 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:28:24.371 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:28:24.371 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:28:24.371 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:28:24.371 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:28:24.371 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:28:24.371 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:28:24.371 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:28:25.750 Cleaning 00:28:25.750 Removing: /var/run/dpdk/spdk0/config 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:25.750 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:25.750 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:25.750 Removing: /var/run/dpdk/spdk1/config 00:28:25.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:25.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:25.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:25.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:25.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:25.750 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:25.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:25.750 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:25.750 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:25.750 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:25.750 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:25.750 Removing: /var/run/dpdk/spdk2/config 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:25.750 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:25.750 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:25.750 Removing: /var/run/dpdk/spdk3/config 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:25.750 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:25.750 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:25.750 Removing: /var/run/dpdk/spdk4/config 00:28:25.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:25.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:25.750 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:25.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:25.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:25.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:25.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:25.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:25.750 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:25.750 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:25.750 Removing: /dev/shm/bdev_svc_trace.1 00:28:25.751 Removing: /dev/shm/nvmf_trace.0 00:28:25.751 Removing: /dev/shm/spdk_tgt_trace.pid253804 00:28:25.751 Removing: /var/run/dpdk/spdk0 00:28:25.751 Removing: /var/run/dpdk/spdk1 00:28:25.751 Removing: /var/run/dpdk/spdk2 00:28:25.751 Removing: /var/run/dpdk/spdk3 00:28:25.751 Removing: /var/run/dpdk/spdk4 00:28:25.751 Removing: /var/run/dpdk/spdk_pid252583 00:28:25.751 Removing: /var/run/dpdk/spdk_pid253155 00:28:25.751 Removing: /var/run/dpdk/spdk_pid253804 00:28:25.751 Removing: /var/run/dpdk/spdk_pid254172 00:28:25.751 Removing: /var/run/dpdk/spdk_pid254699 00:28:25.751 Removing: /var/run/dpdk/spdk_pid254807 00:28:25.751 Removing: /var/run/dpdk/spdk_pid255361 00:28:25.751 Removing: /var/run/dpdk/spdk_pid255376 00:28:25.751 Removing: /var/run/dpdk/spdk_pid255584 00:28:25.751 Removing: /var/run/dpdk/spdk_pid256588 00:28:25.751 Removing: /var/run/dpdk/spdk_pid257236 00:28:25.751 Removing: /var/run/dpdk/spdk_pid257480 00:28:25.751 Removing: /var/run/dpdk/spdk_pid257639 00:28:25.751 Removing: /var/run/dpdk/spdk_pid257807 00:28:25.751 Removing: /var/run/dpdk/spdk_pid257966 00:28:25.751 Removing: /var/run/dpdk/spdk_pid258090 00:28:25.751 Removing: /var/run/dpdk/spdk_pid258212 00:28:25.751 Removing: /var/run/dpdk/spdk_pid258447 00:28:25.751 Removing: /var/run/dpdk/spdk_pid258615 00:28:25.751 Removing: /var/run/dpdk/spdk_pid260765 00:28:25.751 Removing: /var/run/dpdk/spdk_pid260901 00:28:25.751 Removing: 
/var/run/dpdk/spdk_pid261031 00:28:25.751 Removing: /var/run/dpdk/spdk_pid261035 00:28:25.751 Removing: /var/run/dpdk/spdk_pid261285 00:28:25.751 Removing: /var/run/dpdk/spdk_pid261547 00:28:25.751 Removing: /var/run/dpdk/spdk_pid262131 00:28:25.751 Removing: /var/run/dpdk/spdk_pid262223 00:28:25.751 Removing: /var/run/dpdk/spdk_pid262359 00:28:25.751 Removing: /var/run/dpdk/spdk_pid262375 00:28:25.751 Removing: /var/run/dpdk/spdk_pid262565 00:28:25.751 Removing: /var/run/dpdk/spdk_pid262600 00:28:25.751 Removing: /var/run/dpdk/spdk_pid262899 00:28:25.751 Removing: /var/run/dpdk/spdk_pid263028 00:28:25.751 Removing: /var/run/dpdk/spdk_pid263279 00:28:25.751 Removing: /var/run/dpdk/spdk_pid263408 00:28:25.751 Removing: /var/run/dpdk/spdk_pid263437 00:28:25.751 Removing: /var/run/dpdk/spdk_pid263579 00:28:25.751 Removing: /var/run/dpdk/spdk_pid263717 00:28:25.751 Removing: /var/run/dpdk/spdk_pid263843 00:28:25.751 Removing: /var/run/dpdk/spdk_pid263964 00:28:25.751 Removing: /var/run/dpdk/spdk_pid264176 00:28:25.751 Removing: /var/run/dpdk/spdk_pid264302 00:28:25.751 Removing: /var/run/dpdk/spdk_pid264437 00:28:25.751 Removing: /var/run/dpdk/spdk_pid264588 00:28:25.751 Removing: /var/run/dpdk/spdk_pid264771 00:28:25.751 Removing: /var/run/dpdk/spdk_pid264898 00:28:25.751 Removing: /var/run/dpdk/spdk_pid265022 00:28:25.751 Removing: /var/run/dpdk/spdk_pid265226 00:28:25.751 Removing: /var/run/dpdk/spdk_pid265356 00:28:25.751 Removing: /var/run/dpdk/spdk_pid265477 00:28:25.751 Removing: /var/run/dpdk/spdk_pid265602 00:28:25.751 Removing: /var/run/dpdk/spdk_pid265815 00:28:25.751 Removing: /var/run/dpdk/spdk_pid265935 00:28:25.751 Removing: /var/run/dpdk/spdk_pid266065 00:28:25.751 Removing: /var/run/dpdk/spdk_pid266270 00:28:25.751 Removing: /var/run/dpdk/spdk_pid266405 00:28:25.751 Removing: /var/run/dpdk/spdk_pid266528 00:28:25.751 Removing: /var/run/dpdk/spdk_pid266678 00:28:25.751 Removing: /var/run/dpdk/spdk_pid266854 00:28:25.751 Removing: 
/var/run/dpdk/spdk_pid268390 00:28:25.751 Removing: /var/run/dpdk/spdk_pid291556 00:28:25.751 Removing: /var/run/dpdk/spdk_pid293576 00:28:25.751 Removing: /var/run/dpdk/spdk_pid298950 00:28:25.751 Removing: /var/run/dpdk/spdk_pid301395 00:28:25.751 Removing: /var/run/dpdk/spdk_pid303207 00:28:25.751 Removing: /var/run/dpdk/spdk_pid303526 00:28:25.751 Removing: /var/run/dpdk/spdk_pid306623 00:28:25.751 Removing: /var/run/dpdk/spdk_pid309531 00:28:25.751 Removing: /var/run/dpdk/spdk_pid309622 00:28:25.751 Removing: /var/run/dpdk/spdk_pid310034 00:28:25.751 Removing: /var/run/dpdk/spdk_pid310527 00:28:25.751 Removing: /var/run/dpdk/spdk_pid311030 00:28:25.751 Removing: /var/run/dpdk/spdk_pid311330 00:28:25.751 Removing: /var/run/dpdk/spdk_pid311338 00:28:25.751 Removing: /var/run/dpdk/spdk_pid311530 00:28:25.751 Removing: /var/run/dpdk/spdk_pid311637 00:28:25.751 Removing: /var/run/dpdk/spdk_pid311648 00:28:25.751 Removing: /var/run/dpdk/spdk_pid312143 00:28:25.751 Removing: /var/run/dpdk/spdk_pid312565 00:28:25.751 Removing: /var/run/dpdk/spdk_pid313058 00:28:25.751 Removing: /var/run/dpdk/spdk_pid313371 00:28:25.751 Removing: /var/run/dpdk/spdk_pid313456 00:28:25.751 Removing: /var/run/dpdk/spdk_pid313573 00:28:25.751 Removing: /var/run/dpdk/spdk_pid314400 00:28:25.751 Removing: /var/run/dpdk/spdk_pid315029 00:28:25.751 Removing: /var/run/dpdk/spdk_pid319734 00:28:25.751 Removing: /var/run/dpdk/spdk_pid319867 00:28:25.751 Removing: /var/run/dpdk/spdk_pid321893 00:28:25.751 Removing: /var/run/dpdk/spdk_pid324743 00:28:25.751 Removing: /var/run/dpdk/spdk_pid326411 00:28:25.751 Removing: /var/run/dpdk/spdk_pid331378 00:28:25.751 Removing: /var/run/dpdk/spdk_pid335378 00:28:25.751 Removing: /var/run/dpdk/spdk_pid336376 00:28:25.751 Removing: /var/run/dpdk/spdk_pid336895 00:28:25.751 Removing: /var/run/dpdk/spdk_pid345407 00:28:25.751 Removing: /var/run/dpdk/spdk_pid347030 00:28:25.751 Removing: /var/run/dpdk/spdk_pid368754 00:28:25.751 Removing: 
/var/run/dpdk/spdk_pid370908 00:28:25.751 Removing: /var/run/dpdk/spdk_pid371803 00:28:25.751 Removing: /var/run/dpdk/spdk_pid372814 00:28:25.751 Removing: /var/run/dpdk/spdk_pid372913 00:28:25.751 Removing: /var/run/dpdk/spdk_pid372933 00:28:25.751 Removing: /var/run/dpdk/spdk_pid373041 00:28:25.751 Removing: /var/run/dpdk/spdk_pid373376 00:28:25.751 Removing: /var/run/dpdk/spdk_pid374378 00:28:25.751 Removing: /var/run/dpdk/spdk_pid374937 00:28:25.751 Removing: /var/run/dpdk/spdk_pid375190 00:28:25.751 Removing: /var/run/dpdk/spdk_pid376507 00:28:25.751 Removing: /var/run/dpdk/spdk_pid376743 00:28:25.751 Removing: /var/run/dpdk/spdk_pid377178 00:28:25.751 Removing: /var/run/dpdk/spdk_pid379045 00:28:25.751 Removing: /var/run/dpdk/spdk_pid383658 00:28:25.751 Removing: /var/run/dpdk/spdk_pid385914 00:28:25.751 Removing: /var/run/dpdk/spdk_pid389340 00:28:25.751 Removing: /var/run/dpdk/spdk_pid390084 00:28:25.751 Removing: /var/run/dpdk/spdk_pid390941 00:28:25.751 Removing: /var/run/dpdk/spdk_pid392919 00:28:25.751 Removing: /var/run/dpdk/spdk_pid394652 00:28:25.751 Removing: /var/run/dpdk/spdk_pid397924 00:28:25.751 Removing: /var/run/dpdk/spdk_pid397926 00:28:25.751 Removing: /var/run/dpdk/spdk_pid400077 00:28:26.010 Removing: /var/run/dpdk/spdk_pid400177 00:28:26.010 Removing: /var/run/dpdk/spdk_pid400285 00:28:26.010 Removing: /var/run/dpdk/spdk_pid400487 00:28:26.010 Removing: /var/run/dpdk/spdk_pid400578 00:28:26.010 Removing: /var/run/dpdk/spdk_pid402627 00:28:26.010 Removing: /var/run/dpdk/spdk_pid402973 00:28:26.010 Removing: /var/run/dpdk/spdk_pid404944 00:28:26.010 Removing: /var/run/dpdk/spdk_pid406442 00:28:26.010 Removing: /var/run/dpdk/spdk_pid409112 00:28:26.010 Removing: /var/run/dpdk/spdk_pid411765 00:28:26.010 Removing: /var/run/dpdk/spdk_pid417496 00:28:26.010 Removing: /var/run/dpdk/spdk_pid420778 00:28:26.010 Removing: /var/run/dpdk/spdk_pid420783 00:28:26.010 Removing: /var/run/dpdk/spdk_pid430577 00:28:26.010 Removing: 
/var/run/dpdk/spdk_pid430885 00:28:26.010 Removing: /var/run/dpdk/spdk_pid431202 00:28:26.010 Removing: /var/run/dpdk/spdk_pid431600 00:28:26.010 Removing: /var/run/dpdk/spdk_pid432053 00:28:26.010 Removing: /var/run/dpdk/spdk_pid432368 00:28:26.010 Removing: /var/run/dpdk/spdk_pid432698 00:28:26.010 Removing: /var/run/dpdk/spdk_pid433098 00:28:26.010 Removing: /var/run/dpdk/spdk_pid434950 00:28:26.010 Removing: /var/run/dpdk/spdk_pid435149 00:28:26.010 Removing: /var/run/dpdk/spdk_pid438053 00:28:26.010 Removing: /var/run/dpdk/spdk_pid438116 00:28:26.010 Removing: /var/run/dpdk/spdk_pid439457 00:28:26.010 Removing: /var/run/dpdk/spdk_pid443442 00:28:26.010 Removing: /var/run/dpdk/spdk_pid443447 00:28:26.010 Removing: /var/run/dpdk/spdk_pid446126 00:28:26.010 Removing: /var/run/dpdk/spdk_pid447283 00:28:26.010 Removing: /var/run/dpdk/spdk_pid448363 00:28:26.010 Removing: /var/run/dpdk/spdk_pid449020 00:28:26.010 Removing: /var/run/dpdk/spdk_pid450088 00:28:26.010 Removing: /var/run/dpdk/spdk_pid450763 00:28:26.010 Removing: /var/run/dpdk/spdk_pid454845 00:28:26.010 Removing: /var/run/dpdk/spdk_pid455058 00:28:26.010 Removing: /var/run/dpdk/spdk_pid455353 00:28:26.010 Removing: /var/run/dpdk/spdk_pid456579 00:28:26.010 Removing: /var/run/dpdk/spdk_pid456888 00:28:26.010 Removing: /var/run/dpdk/spdk_pid457139 00:28:26.010 Removing: /var/run/dpdk/spdk_pid459029 00:28:26.010 Removing: /var/run/dpdk/spdk_pid459039 00:28:26.010 Removing: /var/run/dpdk/spdk_pid460190 00:28:26.010 Removing: /var/run/dpdk/spdk_pid460566 00:28:26.010 Removing: /var/run/dpdk/spdk_pid460581 00:28:26.010 Clean 00:28:26.010 21:50:16 -- common/autotest_common.sh@1451 -- # return 0 00:28:26.010 21:50:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:26.010 21:50:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.010 21:50:16 -- common/autotest_common.sh@10 -- # set +x 00:28:26.010 21:50:16 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:26.010 21:50:16 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:28:26.010 21:50:16 -- common/autotest_common.sh@10 -- # set +x 00:28:26.010 21:50:16 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:26.010 21:50:16 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:26.010 21:50:16 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:26.010 21:50:16 -- spdk/autotest.sh@391 -- # hash lcov 00:28:26.010 21:50:16 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:26.010 21:50:16 -- spdk/autotest.sh@393 -- # hostname 00:28:26.010 21:50:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:26.270 geninfo: WARNING: invalid characters removed from testname! 
00:28:58.373 21:50:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:58.939 21:50:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:02.219 21:50:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:04.747 21:50:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:08.026 21:50:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:11.300 21:51:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:29:13.822 21:51:04 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:13.822 21:51:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:13.822 21:51:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:13.822 21:51:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:13.822 21:51:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:13.822 21:51:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:13.822 21:51:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:13.822 21:51:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:13.822 21:51:04 -- paths/export.sh@5 -- $ export PATH
00:29:13.822 21:51:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:13.822 21:51:04 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:29:13.822 21:51:04 -- common/autobuild_common.sh@444 -- $ date +%s
00:29:13.822 21:51:04 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721073064.XXXXXX
00:29:13.822 21:51:04 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721073064.ShULbz
00:29:13.822 21:51:04 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:29:13.822 21:51:04 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:29:13.822 21:51:04 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:29:13.822 21:51:04 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:29:13.822 21:51:04 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:29:13.822 21:51:04 -- common/autobuild_common.sh@460 -- $ get_config_params
00:29:13.822 21:51:04 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:29:13.822 21:51:04 -- common/autotest_common.sh@10 -- $ set +x
00:29:13.822 21:51:04 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:29:13.822 21:51:04 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:29:13.822 21:51:04 -- pm/common@17 -- $ local monitor
00:29:13.822 21:51:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:13.822 21:51:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:13.822 21:51:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:13.822 21:51:04 -- pm/common@21 -- $ date +%s
00:29:13.822 21:51:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:13.822 21:51:04 -- pm/common@21 -- $ date +%s
00:29:13.822 21:51:04 -- pm/common@25 -- $ sleep 1
00:29:13.822 21:51:04 -- pm/common@21 -- $ date +%s
00:29:13.822 21:51:04 -- pm/common@21 -- $ date +%s
00:29:13.822 21:51:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721073064
00:29:13.822 21:51:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721073064
00:29:13.822 21:51:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721073064
00:29:13.822 21:51:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721073064
00:29:13.823 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721073064_collect-vmstat.pm.log
00:29:13.823 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721073064_collect-cpu-load.pm.log
00:29:13.823 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721073064_collect-cpu-temp.pm.log
00:29:13.823 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721073064_collect-bmc-pm.bmc.pm.log
00:29:14.757 21:51:05 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:29:14.757 21:51:05 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32
00:29:14.757 21:51:05 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:29:14.757 21:51:05 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:29:14.757 21:51:05 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:29:14.757 21:51:05 -- spdk/autopackage.sh@19 -- $ timing_finish
00:29:14.758 21:51:05 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:14.758 21:51:05 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:29:14.758 21:51:05 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:14.758 21:51:05 -- spdk/autopackage.sh@20 -- $ exit 0
00:29:14.758 21:51:05 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:14.758 21:51:05 -- pm/common@29 -- $ signal_monitor_resources TERM
00:29:14.758 21:51:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:29:14.758 21:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:14.758 21:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:29:14.758 21:51:05 -- pm/common@44 -- $ pid=469507
00:29:14.758 21:51:05 -- pm/common@50 -- $ kill -TERM 469507
00:29:14.758 21:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:14.758 21:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:29:14.758 21:51:05 -- pm/common@44 -- $ pid=469509
00:29:14.758 21:51:05 -- pm/common@50 -- $ kill -TERM 469509
00:29:14.758 21:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:14.758 21:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:29:14.758 21:51:05 -- pm/common@44 -- $ pid=469511
00:29:14.758 21:51:05 -- pm/common@50 -- $ kill -TERM 469511
00:29:14.758 21:51:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:14.758 21:51:05 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:29:14.758 21:51:05 -- pm/common@44 -- $ pid=469543
00:29:14.758 21:51:05 -- pm/common@50 -- $ sudo -E kill -TERM 469543
00:29:15.015 + [[ -n 175973 ]]
00:29:15.015 + sudo kill 175973
00:29:15.025 [Pipeline] }
00:29:15.039 [Pipeline] // stage
00:29:15.044 [Pipeline] }
00:29:15.060 [Pipeline] // timeout
00:29:15.065 [Pipeline] }
00:29:15.081 [Pipeline] // catchError
00:29:15.086 [Pipeline] }
00:29:15.102 [Pipeline] // wrap
00:29:15.107 [Pipeline] }
00:29:15.122 [Pipeline] // catchError
00:29:15.130 [Pipeline] stage
00:29:15.132 [Pipeline] { (Epilogue)
00:29:15.145 [Pipeline] catchError
00:29:15.147 [Pipeline] {
00:29:15.161 [Pipeline] echo
00:29:15.162 Cleanup processes
00:29:15.166 [Pipeline] sh
00:29:15.449 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:29:15.449 469684 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:29:15.449 469724 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:29:15.461 [Pipeline] sh
00:29:15.747 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:29:15.748 ++ grep -v 'sudo pgrep'
00:29:15.748 ++ awk '{print $1}'
00:29:15.748 + sudo kill -9 469684
00:29:15.759 [Pipeline] sh
00:29:16.082 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:24.206 [Pipeline] sh
00:29:24.497 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:24.497 Artifacts sizes are good
00:29:24.515 [Pipeline] archiveArtifacts
00:29:24.522 Archiving artifacts
00:29:24.748 [Pipeline] sh
00:29:25.033 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:29:25.049 [Pipeline] cleanWs
00:29:25.060 [WS-CLEANUP] Deleting project workspace...
00:29:25.060 [WS-CLEANUP] Deferred wipeout is used...
00:29:25.067 [WS-CLEANUP] done
00:29:25.069 [Pipeline] }
00:29:25.089 [Pipeline] // catchError
00:29:25.102 [Pipeline] sh
00:29:25.387 + logger -p user.info -t JENKINS-CI
00:29:25.397 [Pipeline] }
00:29:25.414 [Pipeline] // stage
00:29:25.420 [Pipeline] }
00:29:25.438 [Pipeline] // node
00:29:25.443 [Pipeline] End of Pipeline
00:29:25.483 Finished: SUCCESS